Maximize your IT Data: How to Save Time and Money

July 26, 2017

Jeremy Carter | Manager, Product Management | Ivanti

Melanie Karunaratne | Senior Manager, Product Marketing | Ivanti

Gregg Smith | Solutions Expert | Ivanti

Get a handle on your IT reporting and analytics!

Data, data everywhere and not one useful insight – sound familiar? Our reporting panel will discuss how you can maximize the value of your IT data and get to the insights that impact your business. In this webinar, we’ll cover:

  • Where to start when you don't know where to start
  • How to spot the pitfalls and avoid data traps
  • How to solve business challenges with information you can trust
  • Plus suggestions of what to do with all that newfound free time!

Transcript:

Melanie: Hi, everyone. Thanks for joining us on this Ivanti webinar: Maximize Your IT Data, How to Save Time and Money. My name's Melanie Karunaratne from the product marketing team at Ivanti, and with me today are two Ivanti reporting analytics experts, Jeremy Carter from our product management team, and Gregg Smith, one of our solution architects.
 
For this webinar, I'm going to ask Jeremy and Gregg a number of questions we hear out on the road. I'll cover topics like: What are the reporting challenges you hear? What are the specific needs of the security, service management, and endpoint admins and managers? And, if we have time at the end, we'll open up the chat facility, take a few questions from everyone, and possibly even give you a spin around some reporting tools. This webinar is being recorded, so if you miss anything, you can get the recording at ivanti.com/webinars.
 
Without further ado, let's kick off with the first question, which I'm going to ask both Jeremy and Gregg to answer. There's a lot of IT data out there in the various IT tools. How do we make it work for us? How can the audience make their IT data reporting strategic? Where should you start? Gregg, can you start us off with that one?
 
We appear to be having some technical challenges. If you can hold on one moment, we'll be back with you shortly.
 
Gregg: All right, there we go. Can you hear me now?
 
Melanie: We can hear you now.


Strategic IT data reporting
Gregg: All right. Over the past few decades, everything has been getting faster and faster. Everything's evolving, whether it's transportation, communication, we're cooking our food faster with microwaves, technology's evolving, and our processes are evolving. When it comes to IT and reporting, that needs to evolve, as well, to keep up with the increasing pace of things. Technology, software's evolving, operating systems, the vulnerabilities are evolving, the opportunities for issues. Everything's faster and faster, so we need faster access to the data, faster answers from all this information we're collecting.

 

In IT, we have many different systems collecting all sorts of data. Usually, those come with their own separate reporting tools. Having to use separate tools to access information is a slow way of getting answers to your questions. One way to evolve that is to centralize reporting and have one location you can go to for information from all those different data sources.
 
The next thing is, traditionally, reports are scheduled. You're looking at historical, old data. To be more proactive in dealing with data that's coming at you faster and faster, you need to transition to real time, so you’re looking at what's going on right now, seeing the most current information, and able to report on the trends and the historical.
 
Another thing is reporting, typically, is a very technical task, and it's usually delegated to a team of people who have that technical expertise, which becomes a bottleneck. When you need information about something that's happening right now, you don't have time to wait for a team to put the report together for you. Having rapid content development, being able to create reports and dashboards, and being able to get the information you need quickly is another evolutionary step in reporting that needs to happen.
 
With that rapid content development, it also helps if you're able to simplify the process so you can transition to the next step of self-service. Remove the dependency on the reporting team, and let the people who need the reports and the information get at it themselves, so you can remove that bottleneck.
 
Those are three things we really need to do to evolve reporting: Centralize all the reporting into one location, look at it in a more real-time immediate situation, and make it easy to use as self-service so we're not dependent on the bottleneck of the reporting team.
 
Melanie: Thanks, Gregg. Jeremy, do you have any insights?
 
Jeremy: I would echo what Gregg mentioned. I think the one I'll drill into a bit more is real-time information. Being able to see the data in action also gives you the ability to respond to what's happening. Obviously, watching a dashboard, watching it evolve, watching it change is not something we have time to do day in and day out. One of the key things with data in action is being able to set thresholds, set alerts, be notified when some of those things you need to pay attention to happen, and have the flexibility with the toolset to make responses, make decisions based on those responses, and correct problems before they become a bigger issue.
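The threshold-and-alert pattern Jeremy describes can be sketched in a few lines of Python. This is only an illustration: the metric names and limits are hypothetical, not Ivanti's, and a real system would poll its data sources rather than use a hard-coded snapshot.

```python
# Minimal sketch of threshold-based alerting on a live metric feed.
# Metric names and thresholds are hypothetical examples.

THRESHOLDS = {
    "open_p1_incidents": 5,   # alert when more than 5 Priority 1 tickets are open
    "unpatched_critical": 0,  # alert on any machine missing a critical patch
}

def check_thresholds(snapshot):
    """Return a list of alert messages for any metric over its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = snapshot.get(metric, 0)
        if value > limit:
            alerts.append(f"ALERT: {metric} is {value} (threshold {limit})")
    return alerts

# A real system would poll its data sources; here we fake one snapshot.
snapshot = {"open_p1_incidents": 7, "unpatched_critical": 0}
for alert in check_thresholds(snapshot):
    print(alert)
```

The point is the pattern, not the code: define the limits once, evaluate them against current data on a schedule, and notify only when something crosses a line, so no one has to sit and watch a dashboard all day.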
 
The real-time nature of that gives us a couple of things that help provide an advantage. One is being able to see that happen when it’s a small problem, before it's grown to a big problem, and before it's something you need to go in and reverse engineer to find out what happened in the beginning. Early response, seeing data in action, I feel, is one of the key pieces needed with reporting and something we should have up front.
 
There are a couple of things I've seen. A couple of questions, in my experience, are being asked, and these may resonate with a few people on the line today. One is how protected are we against the most recent security outbreaks? This is a question I know is sometimes asked after something like that happens, but even while you're in that phase prior to something happening, the big question is how protected are we? Are we current on our patches? Do we have the latest versions of the software installed that will help protect our environment? That's a case where I see real-time data, being able to get a current snapshot, see a current view, and even see the progression of how you measure up securitywise will be helpful.

 

Another question I see quite often is how we're progressing with our Windows 10 migration, for example. There's a lot of work going into these migrations, and it’s important to see how they are progressing, what's happening, and being able to correct any problems or errors that come up as they're happening with real-time data. So yeah, I'm going to lean a little on real time for that answer and those couple of examples help show how real-time data can be applied with the reporting solution.
 

The Value of Real-Time Data 
Melanie: Great. Good introduction. Now I want to ask you to apply that to the real world. You touched on the security team. Tell us a little more about what you would do in a real-world scenario.
 
Jeremy: That's a great question. Being more responsive in some of those situations with vulnerabilities is a great one. As we saw a couple of these ransomware attacks happen, one of the things we captured right away was that a specific vulnerability exposed machines on the network. Being able to get a current snapshot, a current view, and watch as those progressed as you applied the patch to the machines and saw them update. I think vulnerability management is a key area where that real-time nature applies.
 
Another step is a single dashboard―being able to capture this information not only in real time but also in a single consolidated place, as Gregg mentioned. That consolidated dashboard gives you the ability to see information together up front and with everything in one place and to be proactive with that information, see what's changing, and see what impact it's having.
 
I met a customer who had several Web pages that made up their dashboards. They looked great, but they also took a lot of time to build. One of the biggest challenges we had working with that customer was getting a summary, a good view across the board, what I call our magic view. That view across the board allows the customer to monitor the risk profile of their client base, find the machines that aren't patched, and analyze that against the latest ransomware attack.
 
Also, from the data security perspective, it means we don't necessarily need to give everybody access to those security tools. We can slice up the data so it's only showing what needs to be seen to those who need to see it. Access controls to those dashboards would be a very key piece to add.
 
Melanie: That's great. Security teams should aim to get real-time consolidated data without compromising data security and the tools they have. Okay, we heard about some high-profile security breaches recently, WannaCry, NotPetya. Gregg, could IT dashboards help us prevent future breaches?
 

Importance of Dashboards to Security
Gregg: Sure. Obviously, one way of preventing breaches is patching, identifying vulnerabilities and making sure your systems are patched. From a reporting perspective, the best way to do that is to determine what is and isn't patched, and how long it’s taking to patch. You can analyze your historical progress. How long is it taking us to patch systems? When patches come out, how long do they take to deploy? You can work on improving those processes so patching happens quicker, so when these vulnerabilities come out, you know you'll have a secure environment as quickly as possible.
 
There's the historical side, but there’s also, as Jeremy mentioned, looking at real time. For this specific situation, if you have a particular breach, it's going to be around certain vulnerabilities. You've identified particular applications that may be providing the vulnerability and maybe certain patches that need to be applied. Having a general overview of your patching process, and also being able to focus, when a specific event occurs, on how you can identify the status and where you're at for that specific vulnerability is important.
 
Here we have a sample of a time-to-patch dashboard. It can give you visibility into what kind of risk you're at for a particular breach. This dashboard is more of a general status across all of our patches: what patches are out there, what machines are missing patches, and the criticality of the patches. This is a good view for your normal, monthly patching, but when a specific event occurs, when a specific breach happens, you're going to want to take this and focus it on the specifics of that breach. You're going to want to focus on specific software that's installed, what machines it's installed on, specific patches, and the status of those patches. That's not necessarily an existing report you're going to have, so when those specific events happen, you need a reporting environment where you can quickly narrow reports down to the specific elements in play for that breach. The idea would be to take something like this dashboard and filter it down to the specifics of that breach.

You need both: a set of dashboards that are high level for executives who only want to know overall how you're doing, and lower-level dashboards and reports for the people in the trenches showing specifically which machines are vulnerable and why they're not patched. You have ongoing action items and need to get those addressed as quickly as possible.
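Narrowing a general patch view down to one breach, as Gregg describes, boils down to a filter over machine data. A rough sketch, with made-up machine names, software identifiers, and patch numbers:

```python
# Sketch: narrowing a general patch-status dataset to one breach.
# Machine names, software identifiers, and KB numbers are hypothetical.

machines = [
    {"name": "pc-01", "missing_patches": ["KB4012212"], "software": ["smbv1"]},
    {"name": "pc-02", "missing_patches": [], "software": ["smbv1"]},
    {"name": "pc-03", "missing_patches": ["KB4012212"], "software": []},
]

def at_risk(machines, vulnerable_software, required_patch):
    """Machines running the vulnerable software and still missing the fix."""
    return [
        m["name"] for m in machines
        if vulnerable_software in m["software"]
        and required_patch in m["missing_patches"]
    ]

print(at_risk(machines, "smbv1", "KB4012212"))  # → ['pc-01']
```

The general dashboard answers "how are we doing on patching overall"; applying two conditions like these turns the same data into the breach-specific view: which machines are exposed right now.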
 
Melanie: Sounds good. Let's move on a little and see if we can give some help to those involved in endpoint management. We hear a lot about Windows 10 these days, Windows 10 migration is starting to get a bit of traction, but we know it's not easy for endpoint management teams to get up and going. Could this be better managed with analytics, Jeremy? Could OS projects and migrations be better managed?


Managing OS Projects and Migrations through Analytics 
Jeremy: I think it absolutely can. Windows 10 migration is no small task for any organization. I recall having a very difficult time trying to get a couple of hours with a customer to talk about and get feedback on his IT environment and the tools they were using. The reason it was so hard, we found, was that he was spending a significant amount of his time preparing reports for the migration they were performing. When we sat down and calculated it, generating those reports from the different systems and reporting back to his management team took a full week of man days. As a result, it left very little time to handle day-to-day things or to have the long-term, strategic conversations we were looking for.
 
When monitoring a major project like an OS migration, it shouldn't take that much time to run the reports and get the statistics to find out how it's progressing and what's happening with it. I think you do need some specific reports for something like a Windows 10 migration so you can see how it's progressing. What machines are ready? Have you defined ready? A report that gives you a list of what those are, so you can identify candidates for the next round of refreshes, for example. What needs to be done? How do you remediate? How do you get ready? Are there hardware or software migrations that need to happen before you can begin that migration? Lastly, how is the migration going? Are you on track? Are you progressing as quickly as you expected to through pilot groups or other groups that are picking up the Windows 10 migration?
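Defining "ready," as Jeremy asks, amounts to a checklist applied to each machine. Here is a rough sketch; the readiness criteria and machine attributes are hypothetical examples, and a real migration checklist will differ:

```python
# Sketch: classifying machines as ready / not ready for a Windows 10
# migration. The readiness criteria here are hypothetical examples.

MIN_RAM_GB = 4
MIN_DISK_GB = 32

def readiness(machine):
    """Return a list of blockers; an empty list means 'ready'."""
    blockers = []
    if machine["ram_gb"] < MIN_RAM_GB:
        blockers.append("needs RAM upgrade")
    if machine["free_disk_gb"] < MIN_DISK_GB:
        blockers.append("needs disk space")
    if machine.get("incompatible_apps"):
        blockers.append("incompatible software installed")
    return blockers

fleet = [
    {"name": "pc-01", "ram_gb": 8, "free_disk_gb": 120, "incompatible_apps": []},
    {"name": "pc-02", "ram_gb": 2, "free_disk_gb": 40, "incompatible_apps": ["legacy-erp"]},
]
ready = [m["name"] for m in fleet if not readiness(m)]
print(ready)  # candidates for the next refresh round
```

Returning the list of blockers, rather than a yes/no, also answers the follow-up questions Jeremy raises: what needs to be done, and how do you remediate to get a machine ready.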

 

You should be able to visualize that and see that, and as you're looking at it, it becomes less about running and gathering reports from each of the different systems to find out how things are progressing, and more about managing by exception. At this point, you can see very quickly there are 69 computers that are not ready for Windows 10. Those 69 computers, regardless of where they are, could be holding up the migration or a particular pilot group’s migration. Watching the workload and monitoring the status of that workload instead of monitoring the different steps it's taking as part of that migration throughout the day will help show that.

 

A summary screen like we're looking at here gives me a very quick glance of what's ready, what's not ready, how things are migrating, and then breaks it down by some of the common questions that are likely to be asked: Show me that breakdown by manufacturer, show me my oldest operating systems, show me by scope, what locations are ahead of the curve, which ones are lagging behind and why. You can get your arms around a migration project like that very quickly by using a visual reference, a visual dashboard that provides information back in real time.
 
Melanie: Thanks, Jeremy. What I'm hearing is the faster, easier way to track migration progress and workload is using analytics, being able to make decisions about next steps in the program and getting notifications. What's not to like compared to spending a week of effort getting a report?
 
Both of you have talked about security, patching, OS migrations. I know sometimes projects like these cause more calls or incidents logged on the service desk. Gregg, do you have anything to say about that?
 

Managing Help Desk Calls with Dashboards
Gregg: You're right. That usually does lead to more calls to the help desk, which I know they appreciate. When that happens, that's a great case for real-time dashboards. You're not necessarily going to have existing reports that focus on identifying the increase in tickets. You need to be able to analyze that a little more dynamically. With real-time dashboards in the days following these projects, you can get the big picture of what's going on with your incident management: where the calls are coming from, what types of problems people are having, and whether you're seeing an increased number of certain types of tickets beyond the normal load.
 
In my past life in an IT department, I was involved with creating static reports. That's pretty much all we had. There's definitely a need for more real-time access to the data, being able to keep a pulse on what's going on. I have some examples here. This dashboard is an example of a higher-level look at what's going on with the help desk, and the nice thing is it combines a look, over time, at how the help desk is doing in terms of the number of tickets they're creating and how quickly they're responding to those tickets. It also brings in, going back to the idea of multiple data sources, the two main things on a help desk: the tickets and the phones. In this example, we're looking at what kind of ticket volumes and response times the help desk is having on the ticket side, and we're looking at what's going on with the phones. How many calls do we have backlogged, and how long is it taking us to get to those calls?
 
Another example: This would be a sample of a dashboard for a specific team. Here we have the database team. They're a little more focused on their tickets and how they're doing. Each team might have a slightly different dashboard, a slightly different focus. In this case, they may be concerned about whether a manager is on any Priority Ones. Do we have any Priority Ones we need to focus on?
 
Another thing is historical reporting. How did we do on our SLA metrics last week, last month? Let's improve on it, let's get our numbers up. Be a little more proactive about that. Identify when tickets are going to violate their SLAs. If you see tickets that are about to breach, put some focus on those and try to knock them out so they don't breach. Use the dashboards to actively work on improving your SLA metrics. You have tickets coming into the group, but are they sitting there unassigned, or are individual analysts not taking tickets? You may need to go in there and start handing them out.
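The "about to breach" check Gregg mentions is simple to express: compare the time a ticket has been open against a fraction of its SLA window. A sketch with hypothetical priorities, SLA hours, and a made-up warning threshold:

```python
# Sketch: flagging tickets that are about to breach their SLA so they
# can be worked before they violate. Priorities, SLA windows, and the
# warning fraction are hypothetical examples.

from datetime import datetime, timedelta

SLA_HOURS = {"P1": 4, "P2": 8, "P3": 24}
WARN_FRACTION = 0.75  # warn once 75% of the SLA window has elapsed

def near_breach(ticket, now):
    """True if the ticket has used over WARN_FRACTION of its SLA window."""
    allowed = timedelta(hours=SLA_HOURS[ticket["priority"]])
    elapsed = now - ticket["opened"]
    return elapsed > WARN_FRACTION * allowed and elapsed <= allowed

now = datetime(2017, 7, 26, 12, 0)
tickets = [
    {"id": 101, "priority": "P1", "opened": now - timedelta(hours=3.5)},
    {"id": 102, "priority": "P3", "opened": now - timedelta(hours=2)},
]
print([t["id"] for t in tickets if near_breach(t, now)])  # → [101]
```

A dashboard panel built on a rule like this surfaces only the tickets worth acting on right now, which is the proactive posture Gregg is describing, rather than reporting breaches after the fact.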

 

Something like this makes it very easy for a manager to spot the action items that need to be addressed right now: the ability to look at the workloads of individuals and the types of tickets they have. In this case, the DBAs may be actively involved with a lot of change orders. Having a pulse on what change orders you're going to be actively involved with in the next few days, or the next week or two, lets you incorporate those into your plan and your scheduling of the work for the coming days.
 
The next example is pulling back to a little higher level. This is looking at all the groups and the overall view of the aging tickets. What tickets are open? How long are these groups taking to address the tickets? Are there particular services being affected more than others? Whether you're looking at a higher level or down in the trenches for the individual groups―they're going to have very specific dashboards tailored to the types of workloads they deal with―having a central location and being able to have dashboards for everybody that pull information from the different systems makes a big difference in how effective you are in your day-to-day operations.
 
Melanie: Thanks, Gregg. Really interesting. I want to stay with the service desk and the ITSM team a little longer. We know that ITSM teams sometimes trade out tools. We know, for example, that when an IT organization, or an organization in general, grows or merges with another organization or they mature, they tend to look at tool replacement. When that happens, it means they may be using more than one tool at a time. In fact, this is probably the same for other IT groups outside of ITSM. Jeremy, any thoughts on how we can help there?
 
Jeremy: That's actually a great point, Melanie, and one that can be very disruptive to an organization. You need to be able to chase down historical information and get statistics on how things happened in the past, for example, while also being able to report on and see what's happening with the new tool. We've dealt with a number of these where there's a period of transition between the ITSM tools being used, and that transition time can be very difficult to report on from both perspectives.
 
Migrations like these can take quite some time, but at the end of the day, people still need reports. They need to be able to see what's happening, what the ticket churn looks like, what's happening with the incidents, the changes, and the request processes, and be able to report on those simultaneously. I have an example here. What you're looking at at the very top is a migration: the incident trend from one tool, which you can see taper off, and the new tool brought online and beginning to ramp up. You're able to see that information historically and see what happened during the migration. As you drill into it down below, what we've done is put together a historical chart looking at both data sources. The old one in this case is the MC incidents, and the other is Ivanti incidents. You can see very quickly, without a break at the migration, what that reporting looks like as it was generated in the past and as they continue to use the new tool in the future.
 
By bringing both sets of data into a common tool, you can report on both sets even if you didn't migrate all of the data. It acts as a bridge between the two toolsets and helps you reduce reporting time. Reports can continue to run against the older toolset's database, even though the tool itself may have been retired, so you keep getting the statistics and information you reported on at one time or would like to continue to see in trending reports going forward.
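Bridging two sources into one continuous trend, as in Jeremy's example, is essentially a month-by-month merge of the two tools' figures. A minimal sketch; the tool names and counts are invented for illustration:

```python
# Sketch: bridging incident counts from an old and a new ITSM tool into
# one continuous monthly trend. Tool names and figures are made up.

from collections import Counter

old_tool = {"2017-03": 410, "2017-04": 380, "2017-05": 120, "2017-06": 0}
new_tool = {"2017-05": 250, "2017-06": 395, "2017-07": 402}

# Sum the two sources month by month so the migration window (here
# May-June, when both tools were live) shows one combined figure.
combined = Counter(old_tool) + Counter(new_tool)
for month in sorted(combined):
    print(month, combined[month])
```

The combined series has no gap at the cutover, which is what lets the trend chart Jeremy describes read as one continuous history across both tools.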
 
Combined Reporting for Multiple Toolsets

Melanie: Bringing together multiple toolsets, that's kind of interesting. Something you mentioned before was you really shouldn't be in all of your IT tools, so I can see lots of uses for this across shared IT departments or MSPs. Are there other places you've seen this, Gregg or Jeremy?
 
Jeremy: I think there are a number of different places. We've seen this in verticals like school districts and hospitals, lots of places where similar reporting is needed but the data view has to be kept separated, and they may be using different tools. There are a couple of boxes to check when it comes to separation of data. One is being able to provide the data to a specific location. You may have a number of locations, but each location needs to see their own reports and their own statistics around the data. You may want to standardize those reports, as well. Having the ability to go into the data and set up rules, which I talked about briefly earlier, that allow a specific region, or the person in a region who's reporting, to see the same reports other regions are seeing but with their own data. I think that's a key piece of it. It saves a lot of time when creating multiple reports and helps standardize how you look at or view the data within those reports, because you have the ability to write a single report. We could take the incident data, for example, and apply filters directly to the data so the person in Chicago only sees Chicago and the person in LA only sees incidents related to that location. That's a quick example, one we see used quite often, and those filters can get quite intricate as you slice the data down further.
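The location filters Jeremy describes amount to row-level rules applied before the report ever sees the data. A toy sketch; the user names, locations, and fields are made up:

```python
# Sketch: one shared report definition, filtered per viewer so each
# region sees only its own rows. Users, locations, and fields invented.

incidents = [
    {"id": 1, "location": "Chicago", "status": "Open"},
    {"id": 2, "location": "LA", "status": "Closed"},
    {"id": 3, "location": "Chicago", "status": "Closed"},
]

# Data-access rules: which locations each viewer may see.
ACCESS_RULES = {"chicago_manager": {"Chicago"}, "la_manager": {"LA"}}

def report_for(user, rows):
    """Apply the viewer's data rule before the report ever sees the rows."""
    allowed = ACCESS_RULES.get(user, set())
    return [r for r in rows if r["location"] in allowed]

print([r["id"] for r in report_for("chicago_manager", incidents)])  # → [1, 3]
```

The design point is that the report is written once and the rule lives with the data, so every region gets the same standardized view without anyone seeing another region's rows.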
 
Gregg: Over the years, I've encountered several situations like this. More recently, we had a school district with principals at different schools, and they wanted to provide a dashboard of tickets and activity specific to each school, but they only wanted each principal to see their school's information. They now have an environment with a single dashboard where all the principals see the same presentation when they access it, but each sees only their own school's data.

 

There are environments, on the help desk side again, where a company manages multiple help desks in the same system. You have an IT help desk, HR help desk, and facilities, and they're all in the same system. They all have incidents together on the same table, but they want IT to be able to report on IT data but not the HR tickets. A situation could be as simple as one help desk, but they don't want nonsecurity IT staff to be able to report on security-related tickets because they have sensitive information in there.

 

The reality is, you have multiple people needing access to data, and that data comes from many different systems. Typically, each has its own reporting tool, so the person who needs data has to go to the different tools to get it from the different systems. From a management standpoint, managing access to all of that is difficult: you have all these different tools and have to grant access to different people in each one. The answer is to consolidate all that into one location where users have access to all the data they need from one centralized place. From the security standpoint, you're able to control access to those data sources through one common security policy.
 
I had a customer who had a report they ran every month, pulling information manually from four different data sources. It was taking them upwards of 16 hours a month to do this one report, nearly a man-month every year on a single report. We helped by introducing this consolidated environment, so they had one system, one environment, where they reported against all four data sources. They recreated the report they had been doing manually and now had an automated, scheduled report that pulled information from the four data sources, and they only spent a couple of hours each month adding in text and things that weren't coming from the data. A huge savings in time by being able to consolidate all that into one environment.
 
Melanie: So Gregg, a report like that, let me ask you, how long would something like that take to produce?
 
Gregg: Well, traditionally, you'd go back to that idea of the reporting team bottleneck. Typically, reporting is a technical process, so you have a team of people who know how and are familiar with the data and the reporting tools, and they put together the reports for everybody. The downside to that is when you have those reporting teams in a large organization, they tend to give priority to the business side of the house, which makes sense, but if you're on the IT side and you need a report, that's a problem because you have a queue of requests. Your request is not only going to the end of the line, it's going to the end of the end of the line behind all the business requests.
 
I've heard horror stories about companies where it's been literally months. I think the worst I heard was an 18-month timeframe from putting in a request to getting a report. With the way things are evolving so quickly, eighteen months to get a report on a specific breach means you're way, way too late to the game. You really need to move to the self-service aspect. Make it easy to get to the data and empower those who need the information to do it themselves so it's a much quicker process.
 
I had a hospital a couple of years ago that we were introducing to this centralized self-service environment. They had me come onsite to do some training. I spent about four hours in a room full of people. We were building dashboards and documents and learning how to create the content and get the answers. At the end, the person who coordinated the training came up and made the comment that they probably should've invited the reporting team. The entire room was managers and executives. They were embracing the idea of self-service and empowering the people who need the data to get the data themselves. It was an afterthought to think, "Maybe the reporting team should know how to use the tool, as well."
 
Melanie: That's funny. That's a great tip. Self-service, self-sufficiency, great practical insights from both of you. You’ve both had lots of experience working in organizations. You mentioned a couple of your experiences just now, but people on the phone are starting out on the reporting journey. What one piece of advice would you each give them? Jeremy, you start off.
 

Importance of Understanding Metrics
Jeremy: I think one of the most important things is to understand the metrics. You probably have a number of things you're watching or reporting on today, but it's really understanding those metrics and questioning some of them, as well, and understanding what you need to question. Knowing what you're looking for and why are the two key things I would highlight. You want a system that's going to be able to grow with you, too, that gives you the dynamic capabilities, the self-service capabilities we've been talking about, to change with the data reporting needs you have as they change, as your organization changes and matures.
 
I think getting access to and pulling information from the system is another key piece: being able to set thresholds, or reports that can be generated and shared with the right people in the organization. To help streamline where the data needs to go, the first question to ask is why. Why do we need this metric? Dig into what value it's bringing to the business and then grow from there.
 
Melanie: Okay. Why? Make sure you can scale. Make sure you can share. Make sure you can have thresholds and notifications. Gregg, anything to add, or has Jeremy stolen your thunder?
 
Gregg: Yeah. This is kind of a departure from what we've been talking about so far, but check the validity of your data. Check your data sources. I've been working with customers for years helping them improve their reporting processes, and many times, when we start digging into their data and looking at it in ways they haven't previously, we see things that don't add up. It's like the old adage of garbage in, garbage out. Many times, when it comes down to it, their data is messed up. Many times, we end up creating dashboards as ways to evaluate the integrity of the data, in addition to the traditional service management reports, ways to check on, say, how fast we're resolving tickets, meeting our SLAs, etc. Create some health checks, too. If you're pulling in data from Active Directory, say, to populate information about your employees, and a lot of your reporting is dependent on location, have some health-check dashboards to see if you have any contact records that don't have location filled out. Are people accurately filling in all of the details around tickets? If not, increase the training and improve the quality of the data in the database because, at the end of the day, the reports you put together are only as valid as the data they're reporting against.
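A health check like the one Gregg suggests, flagging contact records with no location, can be as simple as this sketch (the records and field names are invented):

```python
# Sketch: a data "health check" run before trusting reports built on a
# source. Here: contact records missing the location field. Names and
# fields are invented examples.

contacts = [
    {"name": "A. Lee", "location": "Chicago"},
    {"name": "B. Singh", "location": ""},
    {"name": "C. Ortiz"},  # field missing entirely
]

def missing_location(records):
    """Records whose location is absent or blank; garbage in, garbage out."""
    return [r["name"] for r in records if not r.get("location")]

print(missing_location(contacts))  # → ['B. Singh', 'C. Ortiz']
```

Put a check like this on its own dashboard and the count of bad records becomes a metric in its own right, one you can drive to zero with training before it silently skews every location-based report.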
 
Melanie: Great. Thanks, Gregg. Good insights again. Check your data sources, garbage in, garbage out, we all know that phrase.

 

For those of you who are wondering, or didn't realize, we've been showing screenshots from one of our own reporting tools, something called Xtraction, which is our reporting and analytics tool. This is something Jeremy and Gregg are both specialists in. Gregg, you're on the road a lot or on WebEx talking to clients about this. How long does it take customers to get up and running with Xtraction?
 

Speed and Ease of Using Xtraction
Gregg: Well, that's the nice thing. When you deploy most products, it takes days, weeks, months, lots of planning, and lots of work getting things set up. The nice thing about Xtraction is it’s not only simple to use, but it's also simple to set up. It's a small footprint with low overhead. Typically, I do a one-hour WebEx session; I don't even need to be on site. In that hour, we get it installed and configured, I go through some basic administrative training so customers know how to manage user setup in the system, and then we actually build a few dashboards. In a one-hour timeframe, we can get it installed and configured, do some training, and actually build content. It's literally a plug-and-play system.
 
Melanie: Wow. That sounds good to me. Okay, we're almost at the end of our questions, so what I'm going to do is check to see if there are any questions from our listening audience. Jeremy and Gregg, I don't know if you're getting any questions, but if you are, please read them out, and I will check to see what's coming in.
 
Okay. Someone's asking about coding. How much coding does Xtraction take to connect to my data sources, whatever they are?
 
Jeremy: I can take that one.
 
Gregg: Yes, please.
 
Jeremy: The answer is it depends. The nice thing about Xtraction is we've premapped a large number of commercial products from many different vendors. It's a vendor-agnostic tool. While it covers all the Ivanti products, it also reports against similar products from many other vendors. I mentioned plug and play; it's literally plug and play. That's why, in an hour, we can get it installed, and you're reporting against your data sources the second it's up and running and connected.
 
Depending on the product you're reporting against, there may not be any coding or any SQL changes at all. Some systems, like service management products, customers tend to customize: you'll add custom fields, custom data points, in that product. Typically, you'll want Xtraction to report on those as well, so you need to tell Xtraction about those custom fields.
 
Although it's a little more technical than building the dashboards in the front end, it's really not difficult. A more technical person―someone who knows SQL, a DBA, a report writer, a developer, or someone who's been tinkering with SQL and understands how to write queries―they don't have to write queries, but understanding the concepts makes it easier to do the modeling on the back end. There's no coding per se. The most complex thing you'll do is write some SQL expressions.
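To make the "SQL expressions" idea concrete, here is a small sketch of the kind of expression a report writer might use to surface a customer-added field and a derived value as reportable columns. The schema, table, and column names are hypothetical, and this is plain SQLite, not Xtraction's actual modeling format:

```python
import sqlite3

# Hypothetical incident table -- illustrative only, not Xtraction's schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE incident (
    id INTEGER PRIMARY KEY,
    opened_at TEXT,
    resolved_at TEXT,
    region TEXT       -- a customer-added custom field
)""")
con.execute(
    "INSERT INTO incident VALUES (1, '2017-06-01 09:00', '2017-06-01 13:30', 'EMEA')"
)

# The kind of SQL expression you might register so that the custom field
# and a derived "hours to resolve" value become reportable columns.
row = con.execute("""
    SELECT region,
           ROUND((julianday(resolved_at) - julianday(opened_at)) * 24, 1)
               AS hours_to_resolve
    FROM incident
""").fetchone()
print(row)  # ('EMEA', 4.5)
```

The point is that the mapping step is a one-time bit of SQL, after which the field behaves like any other column in the front-end designer.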
 
Melanie: Great. Thanks. Okay. Any more questions coming through? Where can I find out about the connectors you support? I think I can take that one, because that's on our website. If you go to ivanti.com/xtraction, you can find them there.
 
Okay. We have about 15 minutes left. Gregg, do you have anything you can demo for the people on the call, quickly? Sorry, I’ve put you on the spot.
 
Gregg: Sure. Actually, I think you need to make me presenter.
 
Melanie: I've just done that. The power of WebEx.
 
Gregg: Oh, there we go. Let me see here. All right, can you see the screen?
 
Melanie: Yep, looking good.
 

Xtraction Demo
Gregg: Perfect. All right. This is Xtraction. I have a dashboard loaded up looking at some trending information on service management tickets. It is Web based, and we have a lot of customers using it as a wallboard. You can set up multiple dashboards, set a refresh rate, and cycle through them. You can go full screen, put this up on a wall, and it will cycle continuously from dashboard to dashboard. As far as creating these, the dashboards are very interactive. Obviously, if it’s up on a wallboard, there's not going to be a whole lot of interaction, but if you're viewing dashboards on your screen, being able to drill into the information and interact with the dashboard is something that works really well with Xtraction.
 
Any tool can have nice, pretty graphs and charts. The question is how easy is it to create them. Do you have to engage services or someone technical to build this for you, writing SQL queries behind the scenes? The nice thing with Xtraction is that it's self-service. It's point and click. It's extremely easy to use, and this dashboard can be built literally in minutes. You're not writing any SQL, you're not doing any coding. To give you an idea: in this environment, if you're a designer, you're a click away from being able to build a dashboard. The first thing you do is pick your layout. Whether you want a full-screen graph or a list, we have a variety of formats, anything from full screen to what I call a bingo card. I'll stick with the basic one over two.
 
The other thing we talked about was being able to connect to multiple data sources. On this demo system, we're connected to many different products from many different companies. We have all the Ivanti products, but we also have products from CA, BMC, Microsoft, and HP, phone systems like Cisco and Avaya, SolarWinds, SCCM. We have connectors to Active Directory if you want to report against it. Whether it's service management, asset management, systems monitoring, or project management, if it has a database behind it, Xtraction can report against it. It's a matter of whether we have an existing adapter for that particular product yet. If not, we create it.
 
The idea is, you can point your Xtraction reporting system at multiple sources of data, and then you just pick what information you want to start reporting on right now. I'll say, "Okay, I'm going to come in and look at our incidents." Next, you usually don't want to report on all of that data, so you might say, “I'm not interested in all the incidents in the system, I want to focus on a subset.” It may be tickets that were open during a particular period of time, say opened last month or opened in the past 12 months. Maybe you want to look at change orders with a scheduled start date in the next 14 days. Maybe you don't care about dates and want to see all incidents that are active and not resolved: a report on the active tickets. That's where you need to go in and deal with your filters. Now, if you've written SQL queries and done reporting, this can be quite complex. Do I use “ands,” do I use “ors”? You have parentheses to control the order in which things are evaluated. Even people who know what they're doing can add an extra parenthesis or forget to close one, and it causes errors and mistakes. We've done a lot with the product to simplify all of that so even the most nontechnical person doesn't have to get involved with those complexities.
 
In this case, if I wanted to change the date range, I could come in here and edit this condition. Instead of the tickets that were opened last month, maybe I want to see the ones that were resolved over the past 13 months. The date values that are available vary by data set; if you're looking at assets, you'll have things like warranty expiration date, last time scanned, last time patched, and so on. Keeping the open date of last month, maybe I don't want to look at all the incident tickets. Maybe I'm a manager of a couple of groups, so I want to filter on only my groups' tickets. I can add a filter that says, "I'm looking for where the group equals…” and I see my groups have the word “desk” in the name, so I'll search on that. My two are the LTT service desk and the service desk. It's that easy. I'm not doing “ands” and “ors,” I'm not doing parentheses. You simply add in your filter conditions. If I only want to look at Priority One and Two tickets, I add those conditions in here. Then, now that I've narrowed down what data I want to report on, it's a matter of how I want to see that information. Maybe I want to break it down over the months the tickets were opened, so I can do a timeline.
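The hand-written version of the filter Gregg builds here, with its "ands," "ors," and parentheses, is exactly what the point-and-click builder hides. A minimal sketch, using a hypothetical ticket table (the group names, priorities, and statuses are illustrative, not from a real system):

```python
import sqlite3

# Hypothetical ticket table -- illustrative only, not Xtraction internals.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE incident (id INTEGER, grp TEXT, priority INTEGER, status TEXT)")
con.executemany("INSERT INTO incident VALUES (?, ?, ?, ?)", [
    (1, "LTT Service Desk", 1, "Active"),
    (2, "Service Desk",     2, "Active"),
    (3, "Service Desk",     3, "Active"),    # excluded: priority 3
    (4, "Network Team",     1, "Active"),    # excluded: wrong group
    (5, "LTT Service Desk", 2, "Resolved"),  # excluded: not active
])

# Hand-written filter: my two groups, Priority One or Two, still active.
# Misplace one parenthesis in a query like this and the result silently
# changes -- the kind of mistake a guided filter builder prevents.
rows = con.execute("""
    SELECT id FROM incident
    WHERE (grp = 'LTT Service Desk' OR grp = 'Service Desk')
      AND priority IN (1, 2)
      AND status = 'Active'
""").fetchall()
print([r[0] for r in rows])  # [1, 2]
```

Dropping the parentheses around the two group conditions, for example, would change the operator precedence and quietly return extra rows.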
 
You notice when I drag a component over, I get immediate feedback. This isn't a wireframe report where I design the structure, hit “go,” and wait for it to run. When I drag something over, I get immediate feedback, and this isn't dummy data: it ran a query against the database and brought the results back. I can see immediately whether this is what I expected. If it looks different than expected, maybe my query or my filters aren't right; I can go in and adjust them quickly, reapply, and see if I get what I'm expecting. I'm already looking at my two groups, so maybe I want to see a breakdown by assignee. Here are the different assignees and how many tickets were assigned to each. We'll add, "How long did it take them to resolve?" and we'll get that. I'm now ready to save this. I have a finished, complete dashboard here in the designer.
 
I mentioned the dashboards are interactive. That's automatic, and it's available even here in the designer. You can see, looking at last month, since this is a breakdown over one month, how many tickets there were for each day. We have the total for the month. Our daily average was 208. Our lowest day was one. We had a max of 376. It does some automatic calculations, but it's also interactive. I can look at it and say, "Here was that peak of 376. Let me view those records." Here are the 376 incidents from that day.
 
You can scroll through, and you can click on columns to sort. The nice thing is, when you're looking at a record list like this, you can double click on a record and it will launch you into the native tool. It'll take you into service desk, or into your asset management system, straight to the record. If you're looking at incidents, it takes you straight to the incident. If you're reporting on knowledge articles, it'll take you straight to the knowledge article. You can even customize it with multiple URLs. In addition to taking me into the service desk ticket, I could set it up so that if I right click, I can choose Google and it will run a search against the summary. You can associate multiple custom URLs with the record list.
 
If I want to export this, maybe there are a couple of extra fields I'd like to see before I do the export. On the fly, I can modify what data I'm seeing. Maybe I want to see the affected service and when it was last modified. Now that I have those additional pieces of information, I want to do an export, so I quickly come in and export. We support exports to a variety of formats, and when you're exporting a report, one of the nice things is you can export to PowerPoint. One of the examples I like to give is if you're doing a monthly IT report, you usually have a meeting to go over the contents of that report. When you generate the report, say as a Word document or a PDF, if you also generate it as a PowerPoint slide deck, you’ll have a slide deck to go with the report for the meeting.
 
For a list like this, you typically export to Excel. One of the nice things you can do when you're exporting these lists is show a URL record link. Now, when I do the export and open it up, I have my record list including the dates, custom fields I added in, and I also have a link in my export. This works whether it's PDF, Word, HTML, or Excel. I have active links to take me back, in this case, into the service desk straight to the ticket. People who like to work in Excel love Xtraction because it makes it easy to get data out of the systems and into Excel where they can then do what they want to in Excel.
 
These are also interactive in the sense that I can click on a particular day and, instead of viewing the records, filter. Now the rest of the dashboard is filtered down to only the records from that particular data point. Say I want to look at two days, or say these are the three days after we applied a particular patch. I can click and filter on the first day, add the second day, and add the third day. Now we're looking at those three days combined: who the tickets were assigned to, how long they took to resolve, and so on. For any of these data points, after you've applied the filter, you can click and view the records and go on from there.
 
Melanie: All right Gregg, that's fantastic. Thanks for showing us a little about Xtraction. For those of you who are interested in seeing more about Xtraction, or giving it a go yourself, you can do so by visiting our website, where we offer a free trial of Xtraction. If you go to ivanti.com/xtraction and click on the free trial button, you should be able to request a free trial from us there.
 
I think we are just about out of time. I'd like to say thanks to Jeremy and Gregg for joining us on our journey through reporting and analytics and dashboards. We'll catch everyone again next time. Thanks, all. We’ll speak soon. Bye.
 
Jeremy: Thank you.
 
Gregg: Thank you.