Cloud spending continues to rise as enterprises increasingly look for ways to optimize their IT infrastructure. According to Gartner Group, more than $1 trillion in IT spending will, directly or indirectly, be affected by the shift to cloud during the next five years.

That’s no surprise, given the many benefits of shifting to a cloud-based or hybrid cloud model. One of the most touted benefits of moving to the cloud is the cost savings to be had by only using what you need, when you need it. In fact, a recent survey from RightScale found that 53 percent of cloud users cite cost savings as a focus for 2017.

However, despite all the hype around cost savings in the cloud, many enterprises are over-provisioned and paying for resources they don’t use or need. According to RightScale, “on average, the IT pros surveyed said their organization wastes 30% of its cloud spend. In addition, 39 percent of instance spend is on virtual machines (VMs) that are running at under 40 percent of CPU and memory utilization, with the majority of those running under 20 percent utilization.” This chronic underutilization of cloud infrastructure is a huge waste of money.

Enterprises often buy more capacity than they need to ensure they have enough resources to handle current and future growth (legacy data center thinking). They are also often unaware of which applications are most and least utilized, which can leave a large pool of unused cloud resources running constantly and costing them money.

This is also true as enterprises increasingly use containers. Containers are meant to be temporary and scope-limited, meaning they should spin up and spin down as needed. However, the underlying infrastructure that containers run on is often left running constantly, which undermines the value of the pay-as-you-use cloud business model and only exacerbates the problem of not fully embracing utility-based cloud pricing.

To avoid this waste, enterprises must be able to start and stop their instances to better utilize computing resources. They also need visibility into their networks and to continuously monitor their cloud spend and utilization to get the most out of their investment.
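As a rough illustration of how simple the start/stop piece can be, the Python sketch below uses the AWS boto3 SDK to stop and restart non-production instances on a schedule. The `Schedule=office-hours` tag, the region, and the tagging convention are assumptions for the example, not a prescribed setup; other clouds expose equivalent APIs.

```python
# Sketch: stop tagged non-production instances overnight and restart them
# in the morning. Assumes AWS credentials are configured and instances carry
# a hypothetical "Schedule=office-hours" tag; adapt the tag and region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")


def _tagged_instance_ids(state):
    """Return IDs of instances tagged Schedule=office-hours in the given state."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]


def stop_office_hours_instances():
    """Run from a nightly scheduler: stop tagged instances that are running."""
    ids = _tagged_instance_ids("running")
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids


def start_office_hours_instances():
    """Run each morning: restart the tagged instances that were stopped."""
    ids = _tagged_instance_ids("stopped")
    if ids:
        ec2.start_instances(InstanceIds=ids)
    return ids
```

Triggered from a cron job or a serverless timer, a script like this alone can cut the bill for environments that only need to exist during the working day.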

Cloud utilization continues to be a challenge for enterprises. In fact, some large enterprises are struggling to get even 10 percent utilization from their cloud infrastructure. As the cloud matures, cloud business models need to evolve as well. The hope and promise of using only what you need, when you need it, is not yet a reality for most enterprises, and optimizing existing cloud usage needs to be a top priority for all cloud users.

Cloud automation and monitoring tools can help control these extra costs and maximize cloud resource utilization. There are “bots” that can automatically identify instances that have either been running for a long time or have very low capacity utilization. Bots can also schedule downtime for instances that are not in use: for example, dev/test/QA environments that sit idle overnight, or large-capacity instances used only a few days a month for financial closings or regular risk assessments.
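To give a flavor of what such a bot does under the hood, here is a minimal Python/boto3 sketch that pulls CPU metrics from CloudWatch and flags running instances whose average utilization has stayed low. The 20 percent threshold and 14-day look-back window are illustrative assumptions, not fixed rules, and a production bot would also look at memory and network metrics.

```python
# Sketch: flag running instances whose average CPU stays under a threshold.
# The 20% threshold and 14-day window are illustrative assumptions.
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")


def find_idle_instances(threshold_pct=20.0, lookback_days=14):
    """Return (instance_id, avg_cpu) pairs for low-utilization instances."""
    end = datetime.utcnow()
    start = end - timedelta(days=lookback_days)
    idle = []

    running = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in running["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=3600,          # hourly data points
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
            if avg_cpu < threshold_pct:
                idle.append((instance_id, round(avg_cpu, 1)))
    return idle
```

The flagged list can then feed the scheduling logic above, or simply land in a report so owners can confirm whether those instances are actually needed.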

Enterprises can set up more complex rules for these cloud automation tools to follow. For example, a resize bot can build a list of all instances that have been less than 5 percent utilized over the last 30 days and resize each one to the next smaller instance size, so the enterprise pays the smaller size’s price, often a 50 percent savings per instance. The bots keep stepping instances down until they reach the smallest size available, resulting in significant cost savings.
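A simple version of that resize step might look like the sketch below, again in Python with boto3. The size ladder is a hypothetical example for one instance family, and the approach assumes the workload tolerates a brief stop while the type is changed; a real bot would verify the target size fits the workload before acting.

```python
# Sketch: step a consistently idle instance down one size within its family.
# The DOWNSIZE ladder is a hypothetical example, not an exhaustive mapping.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical "next size down" ladder for one instance family.
DOWNSIZE = {
    "m5.4xlarge": "m5.2xlarge",
    "m5.2xlarge": "m5.xlarge",
    "m5.xlarge": "m5.large",   # m5.large is the floor in this sketch
}


def downsize_instance(instance_id):
    """Stop the instance, switch it to the next smaller type, and restart it."""
    description = ec2.describe_instances(InstanceIds=[instance_id])
    current_type = description["Reservations"][0]["Instances"][0]["InstanceType"]
    target_type = DOWNSIZE.get(current_type)
    if target_type is None:
        return current_type  # already at the smallest size this sketch allows

    # Instance type changes require the instance to be stopped first.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": target_type},
    )
    ec2.start_instances(InstanceIds=[instance_id])
    return target_type
```

Combined with the idle-detection step, this is the basic loop a resize bot repeats until an instance reaches the smallest acceptable size.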

So, does the cloud business model really stand up to its pay-as-you-go claims? Only if you carefully monitor your cloud resources and ensure that under-utilized applications are not running when they don’t need to be.