A Quick Look at ITAM and the Cloud
Descriptions of space can sometimes feel like they apply to cloud asset management, too! While there are many analogies one can draw between space and the cloud, the idea of a vast, seemingly limitless, expanding area where new, ever stranger things are discovered all the time seems particularly apt.
Public cloud has been around longer than you might think. Amazon AWS launched in 2006, the same year that Ghostface Killah’s Fishscale album came out and Anne Hathaway’s The Devil Wears Prada hit the cinemas.
In 2010, Microsoft’s Windows Azure (as it was then known) arrived, alongside Ice Cube’s We Are The West album, and the classic comedy Hot Tub Time Machine starring Craig Robinson.
Despite this, the management of the various resources, and efforts to keep the costs under control, are still quite new in many regards. Talking to ITAM professionals all around the globe, license compliance and cloud cost management certainly seem to be appearing on more ITAM to-do lists in 2019 than ever before.
I believe there are three types of management in the cloud:

- Cost management
- License management
- Access management

It is the first two, cost management and license management, that ITAM professionals should pay close attention to. While access management (who can access your cloud environments?) is very important, it is more of a security/infrastructure concern.
License Management in the Cloud
Whilst this is a huge focus on-premises, being compliant with vendor licensing rules is often put to one side once the cloud gets involved. However, a virtual machine in the public cloud does not equal a virtual machine on-premises; while the technology is similar, the licensing rules can be wildly different.
Let’s look at a few examples.
Running Microsoft software in the public cloud requires license mobility rights, which are typically acquired via Software Assurance. If you have an on-premises license, let's say SharePoint 2013, without Software Assurance, you can run it in your own datacenter for as long as you'd like. However, if someone decides to move that server into the cloud, the organization will be non-compliant.
Oracle only recognizes two third-party cloud environments as authorized: Amazon AWS and Microsoft Azure. If you're in a multi-cloud model, where your organization uses multiple public cloud providers, you'll need visibility of which resources will sit in which clouds, as well as any potential movement between them. Even in the authorized cloud environments, there are different rules to navigate around the number of vCPUs permitted for different editions and, for those of you in an Oracle ULA, using licenses in the cloud can have a real impact.
IBM have defined PVU metrics for use of their software in the public cloud, which are standard across the major cloud providers… except for Oracle. Should you wish to run IBM software in the Oracle cloud, the metrics will be double that of Amazon, Microsoft, Google etc.
The fact that workloads can move into—and between—clouds relatively easily can make maintaining license compliance a tricky endeavor. To proactively track license compliance in the cloud, ITAM would ideally have visibility of all plans to migrate on-premises software workloads into the cloud and have the opportunity to assess potential license complications before everything is set in stone.
It’s worth looking at your contracts and the vendor’s audit rights, too. Are they limited solely to your on-premises environment, or can they audit your cloud environment(s), too? What does that mean in terms of security, data protection, risk etc., if you need to start running third-party scripts across your cloud platforms?
Cost Management in the Cloud
Cloud is costing you money, every second, even while you’ve been reading this sentence. The things that make cloud so attractive—easy access, pay-as-you-go billing, a cornucopia of new tools—are also the things that make it so key to get cost management right as soon as possible. There are several ways you can get your cloud spend under control, but I thought we’d look at two of them here to help get you started.
Turning Things Off
This might sound obvious but it’s almost antithetical to the cloud ethos. The cloud is always on, always available, always working, and so many services are priced on a 24x7 basis. However, while the cloud may be always on, the people using large parts of it aren’t. That means while certain elements of your infrastructure need to be always available (like your website!), a lot of them probably don’t.
Test & Dev is a good example. If people aren’t testing or developing every hour of every day, do all those cloud virtual machines need to be on? Probably not. When looking at 24x7 resources, cloud vendors often use 730 as the average number of hours in a month. If, instead of 24x7, you implemented a 12x5 policy, those same resources would only be turned on for around a maximum of 300 hours per month—a 59% decrease in spend.
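The arithmetic above can be sketched in a few lines. The 730-hour month and the "maximum of around 300 hours" figure come straight from the text; the hourly rate is an invented example used purely for illustration.

```python
# Average hours in a month, as commonly used by cloud vendors for 24x7 pricing.
HOURS_24X7 = 730

# 12 hours a day, 5 days a week; a calendar month spans at most five
# (partial) working weeks, giving the "maximum of around 300 hours".
hours_12x5_max = 12 * 5 * 5  # 300

rate_per_hour = 0.10  # hypothetical cost per hour for one resource

always_on_cost = rate_per_hour * HOURS_24X7          # 73.00 per month
office_hours_cost = rate_per_hour * hours_12x5_max   # 30.00 per month

# Percentage reduction in spend versus an always-on resource.
saving_pct = (1 - hours_12x5_max / HOURS_24X7) * 100  # ~59%
```

Multiply that saving across every Test & Dev resource in your estate and the 12x5 policy quickly pays for the effort of implementing it.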
Rightsizing
This is the idea that things are often over-provisioned in the cloud, and so shrinking them down to the right size can help reduce costs. One of the great things about the cloud is that it’s as easy to deploy a virtual machine with 72 cores as it is to deploy one with two cores, or one with 448GB of RAM as it is one with 1GB of RAM… that’s also one of the worst things about the cloud!
While there is quite possibly big money to be saved through rightsizing, it can be hard to do, simply because it requires much more knowledge of, and involvement with, the hardware and architecture side of IT. Prevention is always better than cure, and one way to go about this is creating an internal catalog of approved cloud resource sizes for different use cases, e.g. “Web Server = X”, “File Server = Y” etc., and having your cloud architects work within those parameters.
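Such a catalog can be as simple as a lookup table that new resource requests are checked against. This is a minimal sketch, not a definitive implementation: the use cases and the Azure-style size names below are invented examples, so substitute whatever your own architects approve.

```python
# Hypothetical internal catalog of approved VM sizes per use case.
# The size names are Azure-style examples; swap in your own standards.
APPROVED_SIZES = {
    "web server": "Standard_D2s_v3",   # e.g. 2 vCPU, 8 GB RAM
    "file server": "Standard_D4s_v3",  # e.g. 4 vCPU, 16 GB RAM
    "test/dev": "Standard_B2s",        # e.g. burstable, 2 vCPU, 4 GB RAM
}

def validate_request(use_case: str, requested_size: str) -> bool:
    """Return True only if the requested size matches the catalog entry."""
    approved = APPROVED_SIZES.get(use_case.lower())
    return approved is not None and requested_size == approved
```

A request for an unapproved size, or for a use case not in the catalog at all, would then be flagged for review rather than deployed automatically.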
However, as with so many processes, they can be difficult to enforce. Although we wear pink on Wednesdays, there’s nothing to stop someone from wearing green. This can be particularly true in the cloud, with such a high number of stakeholders and users. This is where the third type of management, access management, can be key, helping make sure only those who are authorized, and trained on the processes, can create resources, etc.
Both Microsoft and Amazon offer built-in tools that suggest resources that can be right-sized, by looking at the utilization of the virtual machines—e.g. if it’s not going above 60% usage, it’s safe to say you could make it smaller and reduce your costs. Although it may only be a few pence per hour difference, multiply that across your estate and by the number of hours those resources are running for, and it soon starts to make a difference to that larger than expected cloud bill.
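The logic those built-in tools apply can be illustrated with a toy sketch: flag any virtual machine whose observed peak CPU stays below a threshold (60%, per the example above) as a rightsizing candidate. The VM names and utilization figures are invented, and real tools naturally use far richer metrics than peak CPU alone.

```python
# Flag VMs whose peak CPU usage never exceeds the threshold as
# candidates for rightsizing. Threshold follows the 60% example above.
THRESHOLD = 60.0  # percent

def rightsizing_candidates(vm_peak_cpu: dict) -> list:
    """Return names of VMs whose observed peak CPU stayed below THRESHOLD."""
    return [name for name, peak in vm_peak_cpu.items() if peak < THRESHOLD]

# Hypothetical utilization data gathered from monitoring.
observed = {"web-01": 85.0, "test-04": 22.5, "file-02": 41.0}
print(rightsizing_candidates(observed))  # ['test-04', 'file-02']
```

In practice you would feed this from your monitoring data over a representative period, not a single snapshot, so that occasional peaks aren't missed.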
For those of you who are still early in your journey to cloud, here’s a bonus tip: Try and do the rightsizing on-premises, before anything even gets to the cloud. Work with the infrastructure teams to understand the on-premises utilization, then with the architects to ensure their designs aren’t too grand, and you may be able to stop your cloud bill ever reaching outlandish proportions. Quite how you calculate the amount of “cost avoidance” is another matter though!
I could keep you here for days delving deeper into the above methods as well as looking at all the other ways of reducing cloud costs, such as:
- Reserved Instances
- Spot Instances
- Microsoft Hybrid Use Rights
- Resource Location
But it’s Christmas soon and we’ve all got places to go! In fact, I’m off to watch a live performance of Jingle Bell Rock…