Reorganizing Federal IT to Address Today's Threats

August 11, 2011

New reports show U.S. government servers are faced with 1.8 billion cyber attacks every month. A quick look at these numbers and it is painfully obvious that status quo security measures are not keeping pace with today’s threats. Congress has taken a step by introducing the Cyber Security Public Awareness Act of 2011, but more evolution of our cyber defenses needs to occur.


Sam: Hello everyone and welcome to this Lumension Webcast titled Reorganizing Federal IT to Address Today's Threats. My name is Sam Murton [SP] and I'll be your moderator for today's event. New reports show that U.S. Government servers are faced with 1.8 billion cyber attacks every month. A quick look at these numbers and it's painfully obvious that status quo security measures are not keeping pace with today's threats. Congress has taken a step by introducing the Cyber Security Public Awareness Act of 2011, but more evolution of our cyber defenses needs to occur.
With us today to discuss this topic, we're lucky to have two panelists who have a wealth of experience in the Federal space: Richard Stiennon, Analyst with IT-Harvest and author of "Surviving Cyberwar," and Paul Zimski, VP of Solution Strategy with Lumension. Welcome gentlemen, and thank you for joining us today.
Richard: Thanks Sam good to be here.
Paul: Thank you. 
Sam: In today's webcast, Richard and Paul will examine today's threats targeting Government IT systems, how federal IT departments can be reorganized to improve both IT security and operations, and what key endpoint security capabilities should be implemented. We'll conclude with a question and answer session with both Richard and Paul.
But before we get to Richard's presentation, I would like to ask the audience a first polling question, which is: do you have a strategy to counter advanced persistent threats, or APTs? A is "Yes." B is "No." C is "Unknown." And before we dive into the discussion, we have a few housekeeping items to go through. At any point during this webcast, if you have questions, please submit them via the questions tab at the top of the screen and we'll try to answer as many as we can. As we're waiting for the polling queue, please make your selection and we will take a look at the poll right now.
So, thank you for making your selections. Looks like 33% say, "Yes, we have a strategy," 22% say, "No," and 44% say, "Unknown." Richard, Paul, any thoughts on this? Is this a surprise, or is this sort of normal from what we're seeing in the market today?
Richard: Yup, this is Richard. That would be what I would expect. Those that have either been targeted by APTs and know it have probably already instituted a strategy, and that 33% falls into the range of the number of organizations that have been attacked. So, I think that's being reflected here. It's good for the other ones to start thinking about having a strategy, and maybe they'll actually be ones that never get attacked.
Sam: Sure, great. Thanks Richard. And with that, I will turn it over to you to start our presentation.
Richard: Great. Thanks Sam. I guess I should modify my last statement by saying, "never get successfully attacked," because it seems like everybody within the Federal space is under a constant barrage of attacks. So, let me get my first slide here... there. Great. A dark and stormy forecast for Federal networks. I have been tracking this space for 11 years, I guess, and it is only in the last two years that I've started to see responsiveness coming out of both federal IT as well as, you know, Defense Contractors and the rest of the services in the Federal space.
And it was two and a half years ago that I started to hear from industry, "Hey, something happened." All of a sudden, people in Federal are getting it. They're understanding the need for deploying widespread security defenses, and something must have happened. Maybe I'm still searching for what that something is, but there are some high watermarks. And you know, I think Buckshot Yankee is considered by many, and of course Deputy Secretary of Defense William [inaudible 00:04:27] pointed to it originally, as a wake-up call that the Pentagon had. And of course, this was the case of evidently foreign agents infiltrating Military operations with USB thumb drive-borne malware that spread widely around the Pentagon's secret network. The reaction to that was a bit draconian and quite an amazing measure, in that there was a lot of, basically, going through the entire Pentagon's networks and systems and re-imaging and cleaning. A very, very expensive nine-month project, which was of course labeled Buckshot Yankee.
But prior to that, we had, you know, complete ownership of Pentagon email servers for an undisclosed amount of time, or maybe an unknown amount of time, which is even more frightening. And claims of at least 100 million dollars spent just to clean up that incident in 2007. And I think that is really when the wake-up call happened. That's when people inside the Pentagon and other Federal systems became aware of it and said, "Uh-oh, we are really being targeted." It's not just the random hackers in England who are looking for evidence of alien investigations inside our networks; it is our adversaries who are exfiltrating data from our networks.
And then of course, this year has been, I think, developmental for the Federal space as we watched some extremely sophisticated targeted attacks. I think the attack on RSA is a high-water mark for this level of targeting. The classic APT methodology was used: emails were sent to apparently random employees inside RSA from spoofed header addresses. They indicated they were from the HR department; people opened them up and became infected with zero-day malware. The zero-day malware eventually got onto the machine of a person who had access to the trove of RSA's very, very secret seeds for all of their SecurID tokens.
Well and good. When the announcement was made, you know, at least the public discourse around it was, "Hey, a major attack and breach occurred against probably, you know, what we thought was one of the best-secured pieces of data, these seeds." But there were no warnings of what the repercussions could be, because the thought that somebody could actually execute on it and be able to break into an organization based on knowledge of those seeds seemed quite an extrapolation. Well, lo and behold, within a month we had attacks using these seeds against Lockheed Martin, Northrop Grumman, L-3, who knows how many more. Not necessarily successful attacks, or at least these guys were claiming that they were not successful, but we know there was at least a 24-hour period where access was granted using SecurID tokens.
Coincident with that, we've got some other types of attacks against the International Monetary Fund and the exfiltration of data from that. We've seen yet another period of hacker underground activity, with denial-of-service attacks against, and exfiltration of some usernames and passwords from, various targets. No question that we're entering a level of heightened awareness. If we had DEFCONs for this, I would put us at, you know, we should probably be at DEFCON 3.
Well, it's been my contention that something needs to change. We have to create a change environment, and not one that says the change comes from better security awareness, even though, frankly, I guess that's what I do for a living: tell people about the dangers that they face. The change has got to come from somewhere. There are obviously a lot of technological changes available; there's technology there to address every threat that I've seen. And yet, they don't get deployed; there's not the motivation to deploy them. So, where else can we look for change? I believe that change can come from organizational changes, or at least they could be a first step.
And this is a bit of a trial balloon: how do we reorganize Federal IT? You'll get my contact information in one of these slides, or we can give it to you; it's basically [email protected]. Feel free to give me feedback if you think I'm talking completely blue sky here, but this is where I'm heading with my thinking. And I advocate bottom-up rather than top-down change. Unfortunately, in a large democracy, plus large bureaucracies layered on top of that, the tendency is to look for top-down solutions. Things where, you know, you appoint a leader and you appoint a new organization such as Cyber Command to take care of this. Every time we enter a new level of threat, we seem to reorganize at the top. So you know, we had intelligence organizations who were operating well, but then we had to put them under a Director of National Intelligence. And we had, you know, domestic and international security organizations; we had to put them underneath an overarching DHS. And that is just the way democracies work.
The Pentagon has just published their Strategy for Operating in Cyberspace. I believe it's yet another example of top-down strategy documents: start at a high level, and the hope is that eventually, through committee meetings and inter-agency meetings, we'll come up with the details down below. And I suggest that we're gonna get similar results as we did with the Comprehensive National Cybersecurity Initiative, with the follow-on Presidential Directives supporting it, and the Cyberspace Policy Review in the Obama Administration. Basically, the result will be the same: we're going to continue to have attacks that are successful against federal systems, and we're just not gonna move forward as quickly as we can.
And here's what came out of the Pentagon's Strategy for Operating in Cyberspace, and a demonstration of how high-level it is. The first strategic initiative (I don't believe these are all completely strategic) is to treat cyberspace as an operational domain. That is just gonna be the direction that the Pentagon is gonna take for a number of years here until something changes. Employ new defense operating concepts to protect DoD networks and systems: this is the closest of these initiatives to something practical. We know underlying that is this idea that we're gonna deploy intrusion detection within DoD. Partner with other U.S. government departments. Build robust relationships with U.S. allies and international partners. I actually believe the State Department is making some pretty good headway in these areas by appointing cyber security people to the UN. And then, leverage the nation's ingenuity. And that's gonna be the public-private partnership that we see come out of every strategy document.
Well, I don't believe that you can actually have a strategy unless it also assigns responsibilities. That is, you know, if you've seen major strategy documents over the years, there's a group or an organization or a person who's assigned responsibility. And my take on where organizational changes have to go is within each organization, within each of the organizations probably represented by the audience on the phone today. If you have under you the IT department and you've got a security department, here's my suggestion: that we create a separate unit to address targeted attacks.
The operational capabilities of adversaries have changed, what they're doing has changed, and the targets that they're after have changed; it's not all about credit cards anymore. They're after intellectual property at all different levels. And I think we have to have a change in our organizational capabilities to respond to that. So, let me introduce to you the cyber defense team. The reason that it's a cyber defense team is that it acknowledges that there are targeted attacks going on against the organization. I've been lecturing on targeted versus random attacks for 10 years now. And when I think back on it, 10 years ago it was all about random attacks. The targeted attacks were prior to that, back before we had the internet, before we had cyber criminals and spyware and spam that was providing funding to people. But now it's come full circle, and targeted attacks are the new concern.
In this very simple structure, we're gonna have a Cyber Commander, and underneath the Cyber Commander will be three departments with people with separate duties. They're gonna be Analysts, Operational Defenses and a Red Team. And they've got to be, basically, a check and balance in the way they operate. The Cyber Commander, I pulled out of your existing IT security department. This is the person who assigns and directs roles under him, makes sure the correct tools and defenses are deployed. Obviously, it's got to be funded with a budget. He puts in place controls and audit processes so that you can start to get your hands around the metrics associated with defense. You can't have a department and budget and people that are responsible for something without also measuring whether they're effective.
And the key to doing this, I believe, is that we are re-instituting this idea that there's, you know, one throat to choke if a major event happens. The Cyber Commander has to take on the full-time responsibility for preventing these major breaches and, you know, when there is a breach, there's got to be somebody responsible for it. Not the fallback position that most organizations appear to be taking right now, which is, "Hey, we were following industry best practices," or "We survived our last audit and we are in compliance" (in the commercial world, it might be PCI). That is not an excuse for succumbing to these types of attacks. You should have foreknowledge that you're targeted, you should know what is being targeted, and you should have defenses in place to prevent those attacks.
And the Cyber Commander reports to upper management. So, the Cyber Commander is gonna be the one providing to the CIO or the CISO, across the channel, where they stand and what the posture is on a weekly, if not daily, basis. And he's also a primary point of communication to law enforcement and intelligence agencies, because these attacks and the information associated with them is gonna be of extreme interest to those two groups. Underneath the Cyber Commander are Analysts, and this is probably the part that is most different from what already exists inside most IT security departments. In every IT security department that I've worked with, there are individuals there that take it upon themselves to be the Analyst. So, they're looking out and they're going to conferences and they attend webinars and they try and stay on top of it, but it's not their primary duty and it's not their assigned duty. And it tends to, you know, during busy periods, end of quarter, during the audit, fall by the wayside. So, they're not necessarily always on top of the situation. In other words, their situational awareness is not there; it's not keyed in. This is a full-time job. This job has to be done around the clock. They have to understand the state of the art in attack methodologies. They have to get to know potential attackers and monitor their activity.
Now, this can be outsourced; there are plenty of private organizations that provide this sort of cyber intelligence activity. But you should have people on the inside, just like whenever you outsource anything, you have a local expert monitoring the outsourcer. And you should have these analysts, you know, leading, working with each other, ultimately working with other analysts in other organizations and creating that information sharing at their level. They monitor known attack sources, they build the lists of known attack sources, basically keeping IP addresses and domains and tracking them. There's a lot of intelligence to be gathered from the creation of new domains; you can see them being created, and you use those in the tools that I'm going to talk about in a second.
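As a rough illustration of what Richard is describing, here is a minimal sketch of an analyst's watchlist in Python. All names, dates and addresses are hypothetical; a real team would feed this from commercial and government intelligence sources.

```python
from datetime import date, timedelta

# Hypothetical known attack sources gathered from intelligence feeds.
known_bad = {"198.51.100.7", "evil-updates.example"}

# (domain, first_seen) pairs from a hypothetical domain-registration feed.
new_domains = [
    ("hr-portal-login.example", date(2011, 8, 9)),
    ("long-established.example", date(2005, 3, 1)),
]

def flag_suspect_domains(feed, max_age_days=30, today=date(2011, 8, 11)):
    """Domains registered very recently are worth tracking: attackers
    often stand up fresh infrastructure just before a campaign."""
    cutoff = today - timedelta(days=max_age_days)
    return [domain for domain, first_seen in feed if first_seen >= cutoff]

# The combined watchlist feeds the detection tools discussed next.
watchlist = set(flag_suspect_domains(new_domains)) | known_bad
```

The point is simply that the list is a living artifact the analysts maintain full-time, not a one-off export.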
And then, they communicate the threat level to the rest of the cyber defense team. They don't need to be broadcasting constantly to the whole organization that, "You're at a heightened defense level; there are more adversaries." Once a particular design of a particular weapon system is being targeted at one company, it could be targeted at theirs, and they have to make the appropriate decision about what sort of level of alerting and notification has to go on. And they assist in evaluating technology for internal deployment by the cyber defense operations team.
These guys don't do daily patch management. They don't do daily antivirus detection and cleanup of [inaudible 00:18:52]. That's what your security department today is supposed to be doing, and probably does very effectively. It's the stuff that gets past the regular operational security people that these guys are going to look after.
So, they're gonna select and deploy new tools, tools, you know, to detect the existence of advanced persistent threats inside the organization. They're gonna look for beaconing that's going on inside the organization, going outwards, because they've got that list of potential sources of attacks. If any communications or any device connects with those, they're gonna be alerted, they're gonna find that machine, kick off the process for forensics and then cleaning up. They're gonna look for those internal infections, they're gonna look for internal abuse. So, they look for insiders that are downloading massive amounts of data to CDs, for instance, like, you know, what I'm referring to in the WikiLeaks case. They're gonna deploy data loss prevention-style tools for detecting and mitigating the effects of internal, essentially, espionage agents. And they're gonna monitor that and report on that, and they can build up the defensive capability inside the organization.
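The beaconing check Richard mentions can be sketched very simply: compare outbound connections against the analysts' watchlist and flag hosts that phone home repeatedly. The log entries, hostnames and threshold below are all hypothetical stand-ins for a real tool.

```python
from collections import Counter

# Hypothetical outbound connection log: (internal_host, destination).
conn_log = [
    ("wkstn-042", "evil-updates.example"),
    ("wkstn-042", "evil-updates.example"),
    ("wkstn-042", "evil-updates.example"),
    ("wkstn-107", "intranet.example"),
]

watchlist = {"evil-updates.example", "198.51.100.7"}

def find_beaconing(log, watchlist, min_hits=3):
    """Flag internal hosts that repeatedly contact known attack
    sources: a crude stand-in for real beacon detection."""
    hits = Counter(host for host, dst in log if dst in watchlist)
    return {host for host, count in hits.items() if count >= min_hits}

# Flagged hosts kick off the forensics and cleanup process.
infected = find_beaconing(conn_log, watchlist)
```

Production systems would also look at timing regularity and payload size, but the principle of matching outbound traffic against curated intelligence is the same.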
Okay. So, you've got Analysts evaluating the outside threat, and you've got Cyber Defense Operators countering the threats both on the inside and the outside. One more thing that you need, I believe, is a Red Team. Most organizations have a Red Team, but I believe you have to have a full-time Red Team, so that the rest of the IT security operation and, frankly, the rest of all the users know that these guys are there and realize that at any point they could be targeted by this Red Team. They do attack and penetration tests using the latest tools. They use APT techniques to do it, not just scanning the network with network scanning tools looking for vulnerabilities; they are attempting to launch APT attacks against insiders and reporting internally when they succeed at that.
They thrive on social engineering, so they basically create this element of, not... I don't want to say fear, but they put everybody on their toes. Any call asking for a username, password or information of any sort could actually be the Red Team. And you know that the only repercussion of failing a routine attack might be embarrassment, but that's quite a motivator inside most organizations.
So, if we've established that, I'm talking about it at any level. Down to a small department level, anywhere where today there's a standalone IT security department, I'm advocating that there be a cyber defense team, and the next step is to move it throughout the organization. So, you've got similar cells of cyber defense teams at each level of the organization. And ultimately, if we went up and up the ladder, then we'd be creating this overarching cyber command that would be a hierarchy of, really, IT professionals, but all with the same task: to block those targeted attacks.
Elements of a defensive strategy, to wrap up and, I guess, respond to what we've been seeing coming out of Washington lately. What I hope will come out at the next level of strategy is down at this low level. On the network side, you have to be doing complete packet inspection, inbound and outbound. This goes way, way beyond IDS and the various versions of Einstein that are being deployed, because it's not good enough just to say, "Hey, a connection came from here to there and it matched this signature." We gotta have all of the data, and we've got to be able to act on it and we gotta be able to filter it. On endpoints, servers, desktops and embedded systems, I believe the time has come to move to whitelisting. There are many, many systems that don't have to be generic, all-purpose operating systems with the inherent capability to run every single application, which was what the internet was built on. You know, it was built on all these devices that were connected with open standards and basically able to run anything at the user's discretion. Those days are over; it creates way, way too much risk and way, way too much opportunity for attack.
So, whitelisting is the order of the day. Whitelisting down to the level of having a different operating system for a different function. I'd love to get out my whip and crack it at people who manufacture medical equipment, because to me, it's the greatest failing in thinking ahead to use these standard operating systems; I'm talking about Windows, obviously. For life-support systems, for instance, you know, you don't need a device that needs to be patched every month to run a life-support system. Same goes for vehicles and embedded systems that you use for controls, as happens in hospitals, and prisons, and ships, and of course, critical infrastructure. That's where platform diversity comes in. Don't make it easy for somebody to use the same attack methodology, where what comes in on a laptop can also move through the network and get onto a pumping or control station.
And then, the last level is user behavior monitoring. This is where you're looking at everything that the users do on the network and ultimately building up a picture of what's normal, so that you can alert the Red Team or the operational guys when something abnormal is happening in your network. To sum up, the attackers have changed their tools, targets and goals. The defenders must change too. I'm gonna hand it back to you, Sam.
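The "picture of what's normal" Richard describes can be approximated statistically. Here is a deliberately simple sketch: baseline a user's daily transfer volume and flag days far outside it. The numbers and the three-sigma threshold are illustrative assumptions; real user behavior monitoring models many signals, not just volume.

```python
from statistics import mean, stdev

# Hypothetical history of one user's daily data transfer volume, in MB.
baseline = [120, 95, 130, 110, 105, 125, 118]

def is_anomalous(today_mb, history, k=3):
    """Flag activity far above the user's established baseline
    (more than k standard deviations over the mean)."""
    mu, sigma = mean(history), stdev(history)
    return today_mb > mu + k * sigma

alert = is_anomalous(2000, baseline)  # a mass download, WikiLeaks-style
normal = is_anomalous(115, baseline)  # a typical day
```

The value is not the statistic itself but the workflow: anomalies route to the operational team or the Red Team for investigation rather than being silently logged.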
Sam: Thanks Richard. Before we get to Paul's presentation, I do have another polling question I'd like to ask the audience. What is your primary defensive endpoint technology? A, Antivirus. B, HIPS. C, Firewall. D, Application Whitelisting. Or E, Patch Management. Please select the option that best suits your environment and we will share the results after Paul's presentation. And with that, Paul, I will turn it over to you.
Paul: Great. Thanks, Sam. So, I'm gonna switch gears a little bit and talk about some of the key things we can be doing from an operations perspective, or a blue team perspective if you will, on endpoints in our environment for, you know, dealing with advanced persistent threats and these new threats that are out there. And I'm going to key in on three defensive strategies. Some of these may be new, some of them are older [inaudible 00:26:21], but I think that they're all very important.
One is, you know, to really implement a defense-in-depth approach to endpoint security. Second is to shift from a threat-centric, or solely threat-centric, mindset, technology and tool-set into developing more of the concept of trust, or trust-based environments. And then lastly is, you know, really building a bottom-up approach to what I call operational excellence. And that's really, "Are we doing the things that we know we should be doing, the low-hanging fruit? Are we maintaining our systems according to best practices and really doing everything we can to mitigate threats?"
And so, from a defense-in-depth perspective, you know, there is no one technology that's going to protect you, especially against APTs. When we look at APTs, they tend to be very blended, including social engineering, possibly zero-day attacks, and attacks on endpoints or workstations and users as opposed to just brute force on servers. So, there's no one defensive layer really that's going to be able to stop an APT. What we need to do is get out of the mindset of just trying to defend against what we think is bad and move into more of a layered approach, where we do everything from, operationally, removing as many vulnerabilities as we can, to defining trusted environments. We're still running antivirus and HIPS and all the tools that we have come to rely on, but we're doing it in a layered approach, understanding that the more gates we can put forward that someone has to hop through, the better off we are at protecting our endpoints.
And to spend a little bit of time on moving from this threat-centric mindset: it's really easy to get caught up in what's coming at you and what's attacking you. And I think that, to an extent, that's important, and we need to have our Analysts out there looking at the cutting edge, understanding how we're being targeted. And we need to have defensive layers that are actually looking for known threats.
But the issue with a threat-centric approach, where we're only trying to understand what we can identify as potentially bad, is that all the zero-day attacks, all the novel attacks and these APTs can slip through, because we just don't know how we're gonna be targeted beforehand, despite our best efforts. And so, moving to a more trust-centric model: you know, we still want to look at changes and new code that's being introduced to systems and ask, "Hey, do we know if this is known bad? Is this some sort of payload-looking type of code?" But we also want to start asking, "Hey, do I have a reason to trust this? Can I explicitly say that this came from a known provenance or known source? Do I know what program's trying to install it? Was it signed by a vendor? Should my user even have the ability to install anything new on their system?" These are a lot of questions that we don't programmatically ask as new code is being introduced to our endpoints. And these are the sorts of things that can really start to help defend against the unknown and defend against the APTs that are out there.
And so, at the core of developing a trust-based application model on our endpoints is whitelisting, or application whitelisting. And by this, what I'm talking about is, "Hey, can I define a known-good state or environment for my endpoints, such that any new change that's introduced has to be vetted through these sorts of interrogations? Where did this come from? Who signed it? Should my user have this? Etc., etc." And what we end up having is an environment where a certain amount of our applications are authorized because they were pre-existing or they are trusted explicitly via hash digests.
And so, our users have access to the tools that they need to perform their jobs, but by questioning any new changes that are introduced onto our endpoints, what we end up doing is, by default, protecting against unknown and known payload types, because we just don't have a reason to trust them.
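The hash-digest trust Paul mentions reduces to a simple default-deny check. This is a minimal sketch, not any vendor's implementation; the byte strings stand in for real binaries.

```python
import hashlib

# Whitelist of SHA-256 digests for explicitly authorized executables.
authorized_hashes = set()

def authorize(binary: bytes):
    """Record a known-good binary's digest in the whitelist."""
    authorized_hashes.add(hashlib.sha256(binary).hexdigest())

def may_execute(binary: bytes) -> bool:
    """Default-deny: code runs only if its digest is already trusted."""
    return hashlib.sha256(binary).hexdigest() in authorized_hashes

authorize(b"known-good-application-v1.0")
allowed = may_execute(b"known-good-application-v1.0")  # pre-existing, trusted
blocked = may_execute(b"novel-zero-day-payload")       # unknown, denied by default
```

Note that the unknown payload is blocked without any signature or prior knowledge of it, which is exactly the inversion of the threat-centric model.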
We also protect against unauthorized software. It may not be malicious in itself, but it's something that we don't want running on our systems. It could be legitimate software that we just don't support, or it could be software that could potentially be opening up tertiary vulnerabilities for attackers to exploit. And so, the concept of this trust-based environment for an endpoint doesn't necessarily have to be draconian or all-or-none. There are certainly assets where we would want to ensure that no change could possibly occur outside a known and approved maintenance window and very tight change control processes. But there are ways to introduce these trusted environments in a more flexible manner on less critical assets and infrastructure, per se, where users would still have access to the tools they need to perform their job in an on-demand sort of way, where you wouldn't necessarily have to pre-authorize applications for those changes to occur.
But you're still vetting trust through policies. And so, this is the concept of, "I'm gonna trust certain applications, maybe my patch management software, to introduce change. I'm gonna trust certain application vendors that have signed their code, and I'm going to put my faith in those digital certificates. I might trust some sort of location where code is coming from. Or I might just give certain power users the ability to install new software, with my ability to revoke that if I disagree."
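Those flexible trust vectors can be pictured as a small policy evaluation. Every name below (the publisher, the agent, the users) is hypothetical; the point is only that a change is allowed when any vector the administrator opted into vouches for it, and denied otherwise.

```python
# Hypothetical descriptor for a new piece of code arriving on an endpoint.
change = {
    "publisher": "Acme Software",   # from its code-signing certificate
    "installed_by": "patch_agent",  # the process introducing the change
    "user": "jdoe",
    "sha256": "d41d8c",             # truncated, illustrative digest
}

# Flexible trust policy: each entry is one trust vector the admin enabled.
policy = {
    "trusted_publishers": {"Acme Software"},
    "trusted_updaters": {"patch_agent"},
    "power_users": {"admin_kim"},
    "authorized_hashes": set(),
}

def vet_change(change, policy):
    """Allow the change if any configured trust vector vouches for it;
    anything unvouched-for is denied (and logged for possible revocation)."""
    return (
        change["sha256"] in policy["authorized_hashes"]
        or change["publisher"] in policy["trusted_publishers"]
        or change["installed_by"] in policy["trusted_updaters"]
        or change["user"] in policy["power_users"]
    )
```

Tightening or loosening the policy is just adding or removing entries, which is how the same model spans both locked-down servers and flexible user workstations.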
And so, with all this flexibility, you know, every one of these flexible policies changes the balance between security, flexibility and risk. There would be some assets and some environments where this would make sense, and others where extending this kind of flexibility just wouldn't make sense whatsoever, where a much stronger security model would be favorable. But the point is that there is flexibility, and there are options as far as being able to define this trusted environment, where it would be much harder for an attacker to, say, exploit a zero-day vulnerability, because at the end of the day, what they're probably trying to do is install code via their exploit.
New code that's not recognized, that didn't come through some sort of vetted trust vector or mechanism, isn't allowed to run. And so, this is really that example of turning the endpoint security model in on itself. Instead of trying to identify whether something's bad, if we can just say, "Hey, if I don't trust this, it can't run," what we end up doing is disrupting many attacks that are novel and unknown.
And so, a third thing, beyond putting in a layered defense and investigating a trust-based model for endpoints as opposed to just a threat-based model, is: what are we doing from an operational perspective right now, and are we doing it correctly? Do we have operational excellence in our organizations? And you know, there are varying degrees of success with this. What I'm talking about are things like vulnerability scanning, whether it's from the outside with the Red Team or using administrative scanning tools within the network on the Blue Team; patch management; configuration management; just the way that we're maintaining our endpoints from a day-to-day perspective. Do we have SLAs or goals in place for how fast patching can happen or vulnerabilities can be shut down? Are we ensuring that we don't have configuration drift? Are we making sure that all the low-hanging fruit in our environment is taken care of, and that we're not going to be susceptible to an attack because we took our eye off the ball of the day-to-day stuff we should be doing, because we're trying to keep up with or concentrate on all these new and novel attacks that are out there?
And so, some examples of this low-hanging fruit would be, "Have I removed unwanted applications in my environment? Are there applications out there that I don't support, or that there's no need for on systems, or that maybe are introducing vulnerabilities? What am I doing to ensure that I don't have things on my endpoints that shouldn't be there?" And then another obvious one is reducing local administrators. If you don't need to have local administrators in your environment, then by all means, remove them. And this is something that is still fairly prevalent today in environments that we see as a company.
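Both of those hygiene checks amount to set differences between what an endpoint has and what the organization approves. The app names and accounts below are hypothetical; real inventories would come from agent or scanning tools.

```python
# Hypothetical inventory pulled from one endpoint.
installed_apps = {"office_suite", "pdf_reader", "p2p_client"}
local_admins = {"jdoe", "builtin_admin"}

# What the organization actually supports and approves.
supported_apps = {"office_suite", "pdf_reader", "vpn_client"}
approved_admins = {"builtin_admin"}

# Unsupported software is a removal candidate even if not malicious:
# it may be opening up tertiary vulnerabilities.
apps_to_remove = installed_apps - supported_apps

# Accounts holding local admin rights without an approved need.
admins_to_demote = local_admins - approved_admins
```

Running checks like these on a schedule is one concrete way to catch the configuration drift Paul mentions before an attacker does.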
And then a last one that I don't really have a slide for, that I mentioned before, is really that concept of: are you patching and configuring your systems correctly? And this isn't, you know, terribly exciting technology; none of this operational stuff is. But these are the things we do to really reduce... it's one of the most effective ways we have to reduce the exploitable surface area of our systems and to make sure that we just aren't giving attackers an opportunity that they shouldn't have. Especially when we look at the blended nature of APTs, the social engineering that's used, and the fact that users and their endpoints are being targeted, at least as a foothold, with the secondary targets being the more back-end infrastructure that holds the intellectual property. Are we making sure that we're making life difficult for attackers?
And a lot of this comes from just the day-to-day operations in our environment. And that concludes my part of the presentation. I'm gonna hand it back over to Sam for some live Q&A and I believe also to go through the polling question.
Sam: Thanks Paul. Yes, I do have the results from that second polling question, so we'll share those out now. The question was: what is your primary defensive endpoint technology? And of the five choices, 50% said antivirus and 50% said firewall. Paul and Richard, any thoughts on this? I don't think that's a surprise really, but any thoughts?
Paul: Yeah, sure. I have some thoughts. I think antivirus and firewall technologies are the go-to standards for defending endpoints. There's merit in both, and there's certainly protection afforded by both technologies. But what we've seen is that while they're needed and they're part of that defense-in-depth security stack on our endpoints, they're falling short in stopping novel and sophisticated attacks. When you look at antivirus, while it's something we should have and it does provide value, it's not going to stop anything that it doesn't know about already. And I think this is a reality that everyone understands. I don't want to bash antivirus; it's just not living up to the demands of today's attack environment. When we look at it [inaudible 00:40:09]
Richard: [inaudible 00:40:08]
Paul: Go ahead.
Richard: It's like you've got doors and windows on your house, but you don't have locks on them yet.
Paul: Right. Exactly, Richard. And when we look at firewalls, it is important to have both desktop and gateway firewalls, to do the infiltration and exfiltration analysis that Richard was talking about, as well as to stop connections that weren't initiated by an endpoint. That's a good idea. But again, when we look at the way APTs work, someone sends you something, an attachment in an email, or maybe it's a client-side browser attack, and these are all coming through on ports that we basically trust and have open, at least at our endpoints.
And so, those are sort of my thoughts around these technologies. It makes sense that these are the technologies in place, but there are also shortcomings in relying on just these technologies to prevent today's attack vectors.
Sam: Great. Thanks Paul. Before we get into Q&A, I do have one final polling question for the audience, and we'll share the results out during Q&A. Also, if you have questions, please submit them via the questions tab at the top of the screen and we'll try to get to as many as possible. So, with that, here's the last polling question for today's webcast: what is your average time to patch for critical Microsoft vulnerabilities? Is it 24 hours, one week, three months, six months, or greater than six months? Make your most appropriate selection and we'll share those results out shortly.
Also, before we get to Q&A, I just want to make our viewers and listeners aware of some resources that Lumension has put together. Lumension has created a resource center called, "Putting Cyber Security Plans into Action," and within that resource center are a lot of educational resources and tools to help your planning efforts. Lumension also has some free security tools at the link on the screen here, which are scanners to help you quantify your risk. We have three different types of scanners: a vulnerability scanner, which helps identify vulnerabilities in your network; an application scanner, which discovers all applications, whether they're unwanted or malicious or what-have-you; and a device scanner, to help discover removable devices such as USB sticks that are on your endpoints. There are also some white papers specifically created for this audience as well. So with that, we'll now get into the Q&A part of this webcast. I'll share the results of that vote shortly, so please make your polling selections now.
So the first question, Richard, I think I'll pose this one to you. How can you develop effective security if responsibility for security is divided among disparate groups, with no group or entity in a position to oversee the whole, enforce policy, or take care of threats?
Richard: Yeah, great question. I must admit to being naive the first time I walked into the Pentagon, thinking that here in 2003 I would be entering a bastion of IT security and see it all done the way it's supposed to be done, need-to-know access and everything monitored. And boy, was that a wake-up call for me to realize that, at least at the time, the Pentagon was a morass of bureaucracy and conflicting responsibilities. One service would run the network, another service would run the servers, and so on. I mean, these are branches of a military that have long-standing traditions of battling with each other as well.
And basically, that ended up with no security. Back in 2003, there was no firewall rule blocking telnet access, because there were all sorts of applications and administration tasks that required telnet. And even to this day, I find that sort of mentality going on. So listen: you're going to have to assign central responsibility for cyber security, as the commercial world did ten years ago. Inside the commercial world, the Fords [inaudible 00:45:04] of the world, people have already come to grips with, "Oh my god, these security guys don't let us do what we want to do." And they've gotten past that. They've learned how to work with security. They've learned how to deploy new applications and new businesses online working with the security guys.
It may seem draconian, it feels like you've got a Gestapo looking over your shoulder, but it's what was required for business to continue. That's going to happen eventually, and you're going to either do it now, proactively, or you're going to do it a year from now, after a major breach, a major loss of information to our nation-state adversaries. I suggest that it's cheaper and better to do it beforehand. Okay, the other question is lined up. Oh good, are you still there, Sam?
Paul: I think we may have lost…
Sam: I'm just [inaudible 00:46:14]. Sorry, gentlemen. We were just queuing up the responses to our polling question: what is your average time to patch for critical Microsoft vulnerabilities? 75% said, "One week," and 25% said, "24 hours," which I think is better than what we may be seeing on the commercial side. Paul, I know you have some experience on that in terms of working with some of Lumension's customers before they implemented our solutions. Thoughts on this?
Paul: Yeah. I actually think these are really aggressive, good numbers. So, 24 hours to a week for applying critical patches, presumably with testing, potentially in diverse, large environments, these are actually high marks on the operational-excellence end of the spectrum, so kudos to everyone on the call. I think this is really exemplary, and it shows that this is something we can be doing, and certainly are doing, at least among the members on this call, to make sure that the exploitable surface area is as small as possible in reaction to new vulnerabilities. It's almost surprising to me that this is where everyone is, that we don't have a broader spread in these poll results. So, congratulations, everyone.
Richard: Yeah, so let me add this caveat, though. That's fabulous; it shows tremendous operational achievement over the years, because best practice for me has always been the weekend following Patch Tuesday, with the top organizations of the world being able to patch in 24 hours. But I've got a presentation I give on the weaponization of software, and in it I've identified one of the looming threats yet to appear on the rise, even though there are examples of this being done on Ericsson switches in Greece. And that is the software update as an attack vector. So imagine, it might not be Microsoft, but it could be one of your suppliers succumbing to an advanced persistent threat, the way RSA succumbed to such a threat, and their update servers being modified so that you are getting digitally signed, legitimate updates that are introducing new vulnerabilities into your systems.
So in that case, being the first to patch and being good at updating might actually be opening yourself up. It just gives you something to keep you awake at night.
Paul: That's a very good point, Richard. Hopefully we won't see that anytime soon, but it's definitely a potential reality.
Richard: Yeah.
Sam: Thanks gentlemen. So, another question just a couple of folks have asked if we'll get a soft copy of the presentation. Lumension will send out an email to all of today's attendees and registrants. And in that, we'll have a link to the archived webcast as well as a link to these slides. So, you will receive that information within the next couple of days.
Another question that came in is around whitelisting, which was discussed in the presentation today. The question is asking how whitelisting deters an APT compared to some of the other technologies in place today. Paul, I guess I'll put that one to you first.
Paul: Yeah, sure. I'll take that. With the way application whitelisting works, you're defining a trusted set of applications for your endpoints, and you could also be defining a trusted set of scenarios for how new applications are introduced. So, when we look at an APT-type scenario, let's just assume it's a worst-case scenario: an unknown vulnerability or unknown attack on an endpoint, a zero-day exploit trying to drop a customized payload. With that, we would have no patch that could stop the zero-day exploit, because it's zero-day; we don't know about it yet, or it's not been disclosed to the general community.
And we've got a payload that's not going to be identified by our antivirus technology, because we've never seen it before; it's something custom. And let's also say that we've got a desktop firewall on the endpoint, but lo and behold, this came in via an email client or a malicious web page, so we aren't going to stop it at the firewall either.
Where application whitelisting is so promising for this type of scenario is that no matter how that endpoint was attacked, what exploit was used, on what port, and what payload was dropped, that payload is going to be new, something new to the environment or new to the endpoint. And what application whitelisting says is, "Aha! Something new is attempting to be dropped onto the system and executed. It wasn't here before, so should I trust it? Is it on a list of things that I should trust, or was it introduced by a mechanism that I should trust?"
And if neither of those is true, then that new payload is blocked from being executed. It wasn't blocked because you knew it was bad; it was blocked because you couldn't prove it was good. It turns that whole model in on itself, and it makes it much more difficult to attack an endpoint with any sort of traditional method.
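The default-deny decision Paul walks through here can be sketched as a small piece of logic. This is not any vendor's actual implementation; the hash store, mechanism names, and payloads are all illustrative. The point is the inversion: nothing executes unless it is already on the trusted list or arrived through a trusted change mechanism such as the patch system.

```python
# Hedged sketch of a whitelisting decision: allow execution only if the
# payload's fingerprint is already trusted, or it was introduced by a
# trusted mechanism. Names and data are illustrative, not a real format.

import hashlib

TRUSTED_HASHES = set()  # known-good application fingerprints
TRUSTED_MECHANISMS = {"patch_management", "software_distribution"}

def allow_execution(payload: bytes, introduced_by: str) -> bool:
    """Default deny: run only what is provably trusted."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in TRUSTED_HASHES:
        return True                 # already on the whitelist
    if introduced_by in TRUSTED_MECHANISMS:
        TRUSTED_HASHES.add(digest)  # trusted channel extends the list
        return True
    return False                    # can't prove it's good, so block

# A payload dropped by an exploit (untrusted mechanism) is blocked even
# though no signature ever identified it as malicious.
print(allow_execution(b"custom-apt-payload", introduced_by="browser_exploit"))    # False
print(allow_execution(b"approved-update.exe", introduced_by="patch_management"))  # True
```

Notice that the verdict never depends on recognizing the payload as bad, which is exactly why a never-before-seen APT payload is still blocked.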
And so that's the real promise behind application whitelisting, or the concept of defining a trusted environment for your endpoint. It's not to say that other technologies shouldn't also be there; again, it's a defense-in-depth approach, any technology is defeatable, and there's no 100% silver bullet. But it is something that really throws a monkey wrench into the way that systems and endpoints are typically targeted today. Richard, any additional thoughts?
Richard: Yeah, to me, it's taking the old motto, or the mantra, for firewalls, which is, "Deny all except that which is explicitly allowed," and moving it to the endpoint. So, it represents significant hardening of the endpoint over the default build model. And for APTs, the attacker does some preliminary work: they probably know what operating system you have and at what level, they design a piece of malware for that operating system, and they do their in-depth research to figure out who's most likely to open an attachment and what the best sending email address is.
And it could be a trusted outsider or trusted insider that's sending that email. So they've targeted all that, and it's going to be opened, and the guy who opens it is not going to notice that he got infected, unless you're doing whitelisting, and whitelisting will just not allow it to run, because it's something that wasn't pre-approved.
Sam: Great. Thank you, gentlemen. We have time for one more question, this one says, "Would it be considered dangerous to become one vendor centric for all our networking and security needs?" Paul, I'll put that one to you first and then Richard, you can chime in.
Paul: Actually, I think I'll pass that to Richard just so it's a little bit more non-biased response because I am the vendor on the call. So Richard, have at it.
Richard: Yeah, but you're not IBM, so you know [inaudible 00:55:08] telling your customers to buy everything from one vendor. And I think there certainly is danger, because large vendors, once they get to a certain point, tend to engage in practices to create lock-in, right? Everybody's familiar with that, probably because they use Cisco gear, so they've seen it over and over: you've got to buy Cisco networking gear because nothing else will work with it.
So there is that danger. I think from a security perspective, it is okay to use the same vendor for applying security to the same medium. In other words, for the network, it's okay to have one vendor that does all of the network processing in order to block targeted attacks. That's not to say you won't need something else for your email server, which is pseudo-network, and then endpoint protection on top. It would be great if you could have a single-vendor solution, because it can be a lot easier to manage, and therefore a lot fewer things fall through the cracks.
But invariably, you're going to need different vendors for different operating systems. So, if you're in a mixed environment, as I think you should be, you might end up having different vendors. Be aware of the danger of vendor lock-in, but don't overly complicate things just to introduce new vendors into your organization.
Sam: Great. Thank you, Richard. And that concludes our Q&A session; we're running out of time and coming up on the top of the hour. With that, I would like to thank both Richard and Paul for sharing their insight on today's event. Any questions that we were not able to get to today, we'll answer and respond to via email. Included with that will be a recording of the webcast, which we will make available online shortly via the Lumension Security website. We will also send the link to the archived webcast, as well as the presentation slides, via email shortly, once those are finalized on our server.
For more information on Lumension Security or any of Lumension's solutions, you can contact us through our website, check out our blog, call us at 1-888-725-7828, or email us at [email protected]. Thanks, everybody, for attending. Thank you again, Richard and Paul. This now concludes our webcast.