Press Releases

Cloud Expo NYC

DivvyCloud is exhibiting its technology at Cloud Expo NYC in Booth #237, June 6-8. Jeremy Snyder, VP of Business Development at DivvyCloud, is speaking at a general session and during the Tech Talks at the event. Jeremy will share best practices, industry insights and...

Don’t lose track of your instances, tag them.

Bot of the Week: Tagging Audit Bot   This bot inspects cloud resources to validate that they are tagged with appropriate key/value pairs (e.g., “Environment: Production”). The policy can be applied as a global policy, or fine-tuned to accommodate different...

Instances with Failed Status Checks

What it does: matches instances that fail the system/reachability status checks. This bot identifies compute instances which fail instance/system reachability checks. When failure occurs, this means the system is not accessible over the Internet and is likely running...

DivvyCON2

DivvyCloud is excited to announce our second customer event! DivvyCON2 Dates: September 2017 Location: Rosslyn, VA

SCHEDULED INSTANCES BOT

What it does: Schedules automatic starts/stops for cloud resources to save money The Scheduled Instance Bot provides the ability to configure daily stop/start schedules for compute infrastructure. Its goal is to drive down monthly...

CHECK EXPIRED SSL CERTIFICATES

What it does: Identifies expired / soon to expire SSL certificates This bot identifies SSL certificates which are set to expire and/or have already expired. When this occurs it can present end-users accessing web-based applications with browser warnings and errors...

AWS re:Invent 2016: What’s new in the cloud?

Amazon Web Services held its fifth annual AWS re:Invent conference in Las Vegas. This year’s event was not only packed with attendees and vendors, but it was also chock-full of news and announcements. The public cloud provider released dozens of new features and...

Bot of the Week: STALE SNAPSHOT AUDIT

What it does: Identify snapshots which are older than a defined threshold of days. This bot looks for snapshots of compute volumes, such as AWS’s Elastic Block Storage service, which are older than a defined number of days, such as 30. The value can be changed to accommodate...

DivvyCloud for DevOps: A Match Made in Heaven

What is DevOps? DevOps, the consolidation of development activities with ongoing operations, is the idea that development teams and operations teams should work together closely on launching and running applications. What...

Bot of the Week: SECURITY RULE AUDIT BOT

What it Does: Identify and Close Ports Open to Unauthorized Networks This bot inspects all configured resource access lists such as AWS Security Groups, Azure Network Security Groups and Google Compute Engine Firewalls for the presence of ports/protocols which are...

DivvyCloud joins Google Cloud Platform Partner Program

“With a growing shift towards containerization among our enterprise customers, we felt it was time for us to build a closer relationship with Google, the clear leader in many aspects of container strategies. Also, we are impressed with the renewed focus we’ve seen from Google with the goal of enabling the movement of enterprise workloads to GCP,” said Peter Scott, Chief Marketing Officer at DivvyCloud.

Auto-scaling: A Myth or a Possibility

In a recent article on CloudTech.com, Monica Brink tries to shatter the illusion that auto-scaling, or the automatic resizing of application resources depending on demand, is an unachievable feat. She is right. That is, if the leadership of an organization has the...

Cloud provider partner programs spell profits

Partners are often the key to a cloud provider’s success. There are many reasons for this, but the most significant is the fact that cloud providers may be selling infrastructure as a service, but they’re not SERVICE PROVIDERS in the true sense of the word. Cloud...

DivvyCON1, May 12th 

Please join us for this half-day session where we will share customer experiences, DivvyCloud’s future roadmap, and hear insights and feedback from all our guests during open discussion.

The Power of Bots: Coming to a Cloud Near You

Bots are coming to the cloud. You may have heard of bots -- software applications that run automated tasks over the Internet and automate human processes with lightning speed. Bots have historically been used for malicious purposes such as coordinated...

The Complexities of Managing AWS at Scale

Public clouds such as Amazon Web Services (AWS) offer the ease and flexibility of increasing and eliminating resource capacity as needed. This is a vast improvement from on-premises data centers that are usually either under- or over-capacity, costing enterprises...

DC Modern Enterprise Tech MeetUp

The DivvyCloud team will be attending the DC Modern Enterprise Tech MeetUp on April 28, at 5:30pm. The DC Tech MeetUp works to foster innovation in enterprise tech, build a community of learning, and promote the best and brightest within the space.

Collision Conference

The DivvyCloud team will be attending the Collision Conference in New Orleans from April 26-28. Collision is “America’s fastest growing tech conference,” created by the team behind Web Summit.

AWS Summit (Chicago)

The DivvyCloud team will be attending the AWS Summit in Chicago from April 18-19. This event is designed to educate new customers about the AWS platform, and offer existing customers information on architecture best practices and new services.

Splunk, DC

DivvyCloud will be attending SplunkLive! DC on April 12th. Splunk software is used to deepen business and customer understanding, mitigate cybersecurity risk, prevent fraud, improve service performance and reduce cost.

VIDEO: Demonstration of Kubernetes in action

In our last article on Cloud Sprawl, we featured video from the most recent Washington, D.C. and Arlington Infrastructure as Code Meetup. This Meetup, which occurred in early March, featured Ruben “Ben” Cessa Flores of the Inter-American Development Bank presenting to...

How to Combat Security Risks in the Cloud

In his article, How Companies Can Minimize Their Cloud Security Risk for the Wall Street Journal, Robert Plant (@drrobertplant) addressed many hazards that can arise when enterprises move applications and data to the cloud. He states that while the shift to the cloud...

Dropbox drops AWS to embrace hybrid cloud

In a recent post on Cloud Sprawl, I looked at the “battle” between private cloud and public cloud, and discussed why those alleging that private cloud was down and out may be getting ahead of themselves. Ultimately, despite the incredible revenue numbers for public...

RSA 2016: Observations from the Show Floor

As the biggest IT security show in the industry celebrated its 25th year, RSA 2016 attracted a record number of more than 40,000 attendees.  And while this is very impressive, a majority of the companies exhibiting at RSA offer the exact same thing. In essence, they...

Security Webinar

Join retired CIA CISO Bob Bigman and Brian Johnson of DivvyCloud as they discuss security and other enterprise considerations in moving to the Cloud.

Events

BrightTalk Webinar

DivvyCloud is speaking at this event.

Dates: Oct 18, 2017, 2PM EST
Location: Virtual, www.brighttalk.com

AWS Meetup Montreal

DivvyCloud is speaking at this event.

Dates: October 12, 2017
Location: Montreal, Quebec, Canada

AWS Meetup Toronto

DivvyCloud is speaking at this event.

Dates: September 28, 2017
Location: Toronto, ON, Canada

VMWorld 2017

DivvyCloud is sponsoring this event.

Booth: #700-G
Dates: August 27-31, 2017
Location: Mandalay Bay, Las Vegas, NV

AWS Summit NYC

DivvyCloud sponsored this event.

Booth: #541
Dates: August 14, 2017
Location: Jacob K. Javits Center, New York, NY

Black Hat

DivvyCloud sponsored this event.

Dates: July 26-27, 2017
Location: Mandalay Bay, Las Vegas, NV

AWS Seattle Meetup

DivvyCloud spoke at this event.

Dates: July 20, 2017
Location: SURF Incubator, Seattle, WA

The Time Has Come: Multi-Cloud is a Smart Approach for IT Modernization

In this recent Cloud Technology Partners podcast, The Next 5 Years Will Have More Digital Innovation Than the Last 50, cloud computing expert David Linthicum talks to Mike Bainbridge, former Chief Technologist of Rackspace, about the role of the cloud in IT modernization.

As one of the cloud industry’s thought leaders, David Linthicum is well-known for his pioneering work in cloud computing. In their podcast, both David and Mike offer terrific insights around how enterprises are transforming themselves via the cloud.  Well worth the listen.

Towards the end of the discussion they touch on the topic of multi-cloud. This is one area where I disagree with their advice for enterprises. Multi-cloud refers to using more than one cloud vendor, such as AWS, Azure or Google, for your applications and workloads. Some companies opt for this approach to ensure that they don’t get locked into one particular platform.

David and Mike advocate that enterprises choose a single provider for all their cloud needs. They argue the risk of vendor lock-in is overblown, and advocate strategic commitment to a single cloud platform, with benefits like gaining in-depth knowledge and expertise, laying down standard deployment and management processes, etc. My interpretation of their advice was…just make a decision, Mr. CIO, stick with it, and force your organization to use the cloud of choice.

I didn’t like this approach.  The idea that IT command and control rules the roost is no longer valid in today’s business and technology world. Enterprises must embrace the cultural changes of DevOps, self-service and the democratization of technology that is happening all around us.  Today’s best CIOs don’t look for innovation and technical solutions in a Gartner report or executive forum, and then lay down marching orders for the enterprise.  They tap into their technical teams and ask “What do you like?”  “What tech are you using in your personal projects?” “Show me proof of concept for something that might solve this problem.”

In an ideal world, enterprises could standardize on a single cloud platform, but I don’t see that as a reasonable option with independent business units/product teams and different innovative services being offered by the different cloud providers. In addition, enterprises need to consider the strategic impact of vendor lock-in in terms of cost, security and innovation. For example, will GCP’s strong focus on containerization change the fundamentals of IaaS economics and leap-frog AWS in three years?

A multi-cloud approach offers many benefits, including the flexibility to integrate best-of-breed technologies and services, and it enables basic vendor management: pitting one cloud provider against the others to drive value. And multi-cloud environments CAN BE MANAGED with cloud-agnostic automation technologies like DivvyCloud. With pervasive data harvesting to identify problems or opportunities, and Bot automation to remediate issues without the need for human involvement, many of the biggest risks and inefficiencies of multi-cloud management can be addressed.

Today’s cloud services are easily accessible and today’s employees are increasingly technologically sophisticated. The better path is to allow product and technology teams to use the clouds that best fit their needs and stay out of the way. Don’t try to force all your technologists, engineers, and developers through an old-school CMP, broker services, or service catalog approach.  It won’t work. 

They will go around you, setting up applications and services that will ultimately grow your shadow IT. Instead, it’s better to have enterprise accounts across different cloud providers, knowing your baseline cost, security and governance policies will run in the background, ready to take action when someone wanders outside the lines. Maybe in multi-cloud you can have your cake and eat it too!

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Delete Exposed Snapshots Before Sensitive Data Loss…and Media Headlines Ensue!

Bot of the Week: Exposed Public Snapshots

[To learn more about this Bot, read this blog post by Thomas Martin]

Automatically delete any snapshot exposed to the public as soon as it’s detected. Any time a new snapshot is identified or an existing snapshot is modified, it will be inspected to determine whether it is marked for public access, and appropriate automated action will be taken to remediate.

Why do I care:

Imagine for a moment that your company has an EBS volume with customer credit card data, or Personally Identifiable Information (PII) about your customers and/or employees. An administrator wants to share a snapshot of this volume with another account for backup purposes, but instead of adding the secondary AWS account for sharing, the admin marks the snapshot as public. This is a gold mine for malicious actors, and could be a catastrophic and embarrassing legal/PR disaster for the company.

A recent article by The Next Web identified droves of AWS users carelessly leaking sensitive data via this feature. In response to the article, AWS quickly released a new Trusted Advisor check which, when enabled, alerts account administrators to the issue. It’s a good move by AWS, and honestly something that likely should have been made available to the public long ago, but it doesn’t take action to fix the problem in real time.

Background on Cloud Storage:

Storage has always been a challenge. AWS pioneered scalable storage solutions in the cloud, both for object storage (S3) and block storage (EBS). EBS volumes are network-attached volumes that can be attached to an instance. These allow data to persist through instance lifecycle controls including stop, start and resizing. They come in a variety of flavors including general purpose SSD, magnetic, and even provisioned IOPS for those I/O-intensive workloads. They can even be encrypted using Amazon’s Key Management Service, providing improved security and data protection.

AWS provides the ability to create snapshots (backups) of data on these EBS volumes, and persists them into S3 at a fraction of the cost (~90% cheaper than the volume). This data can be retrieved at any time, is only charged for the incremental difference between snapshots, and can even be shared with other AWS accounts and/or the public. This last piece is extremely important and should not be overlooked. With just a few clicks in the AWS console you can mark your snapshot as public, which in essence allows any AWS customer around the world to make a copy of the snapshot and begin using it.
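
For readers who want to see what this check looks like against the raw AWS API, here is a minimal sketch in Python using boto3. It lists snapshots owned by your account whose create-volume permission is granted to the “all” group, which is what “public” means under the hood. This illustrates the underlying API call, not DivvyCloud’s internal implementation; the region is an example assumption.

```python
# Minimal sketch: find EBS snapshots in this account that are shared publicly.
# Illustrates the underlying AWS API via boto3, not DivvyCloud's implementation.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):   # only snapshots we own
    for snap in page["Snapshots"]:
        attrs = ec2.describe_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
        )
        # A grant to the "all" group means any AWS customer can copy this snapshot.
        if any(p.get("Group") == "all" for p in attrs["CreateVolumePermissions"]):
            print(f"PUBLIC snapshot: {snap['SnapshotId']}")
```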

There are very few circumstances where an organization would want to make their data available to the public. One of the only legitimate scenarios where this feature is used is with AMIs (Amazon Machine Images). AMIs enable the quick provisioning and deployment of an Operating System to EC2 instances. For companies such as RedHat, Microsoft and Canonical, which routinely create offerings of their OS for the public to use, it makes sense to have these OS snapshots available to every AWS customer.

How DivvyCloud Bots Address Public Snapshots and Other Compliance Issues:

DivvyCloud has responded to this security risk by providing automatic checks via our BotFactory automation platform. A new automation Bot was put in place to routinely check for this security gaffe, and it does so globally across your entire cloud footprint. Unlike AWS Trusted Advisor, which can be painful to track across multiple cloud accounts, the DivvyCloud check gives you a single-pane-of-glass view across all connected public/private cloud accounts, and surfaces the issue immediately upon login. The image below illustrates what a user would see upon logging into the tool. This list shows all the compliance and security issues you want to track, including Exposed Public Snapshots (5th one down).

BotFactory goes a step further, though: additional actions beyond simple visibility can be configured to eradicate these security issues and, more importantly, keep them from occurring going forward. As you can see in the example Bot’s configuration below, the Exposed Public Snapshot Bot will automatically delete the bad snapshot as soon as it’s detected (hours = 0). Any time a new snapshot is identified or an existing snapshot is modified, it will be inspected to determine whether it is marked for public access.

Fine Control and Flexibility in Defining Policy and Automated Actions:

As with all Bots, this policy can be fine-tuned. Additional actions/exclusions can be put in place if there’s a valid reason why a handful of public snapshots must exist across the organization’s cloud footprint. Another example of an action would be to not delete the snapshot, but to automatically revert the permissions to a private snapshot and send notification to the IT team for further analysis/triage.
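
As an illustration of that gentler remediation, here is a sketch of how the permissions could be reverted to private with boto3. The notify() helper is a hypothetical placeholder for whatever alerting integration your IT team uses.

```python
# Sketch: revert a public snapshot to private instead of deleting it, then
# hand it off for triage. notify() is a hypothetical placeholder.
import boto3

def make_snapshot_private(snapshot_id, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    ec2.modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="remove",
        GroupNames=["all"],  # strip the public "all" group grant
    )
    notify(f"Snapshot {snapshot_id} was public; permissions reverted to private.")

def notify(message):
    # Placeholder: wire this to email, Slack, or a ticketing system.
    print(message)
```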

Effective and autonomous management of AWS EBS snapshots is possible, and ensuring that your organization’s cloud footprint isn’t leaking sensitive data requires just a little help from Divvy’s automation Bots.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Your Amazon EBS Snapshots and RDS Data may be leaking sensitive data to the public… And that’s just the tip of the iceberg. Fix it permanently!

A recently published article outlined the careless user behaviors that are allowing sensitive company data contained in EBS snapshots and RDS services to leak into the public domain. AWS has released new functionality to “see” and be notified about these risks via Trusted Advisor. But, as an experienced technologist working with Fortune 100 companies to deploy enterprise applications to cloud infrastructure, I can tell you firsthand that the misconfiguration of RDS and EBS Snapshots is only the tip of the iceberg of how careless set-ups, and the lack of an action-based configuration control platform, can put your data and infrastructure at grave risk. S3 buckets, misconfigured firewall ports, improper security groups… the list goes on and on.

The potential financial and reputation losses to companies that don’t proactively manage public access to Cloud infrastructure can be catastrophic. But rest assured, these pitfalls are not the result of an insecure or faulty product from AWS or any other public cloud provider. In fact, each of these services is specifically designed to enable public exposure when desired. The idiocy in all of these events is that they were completely preventable had a cornerstone tagging and monitoring/action system been put in place.

Don’t watch user mistakes in the rear-view mirror through Trusted Advisor alone… Let’s dig in and let me take you through the foundations of how to proactively set up, monitor, and act on your cloud to ensure risky behavior is caught and remediated so that your company doesn’t become the subject of media headlines.

Across clients, I have seen tens of millions of dollars spent annually on ITIL processes and ISO 27001 compliance within traditional company data centers. Every IT asset ID’d, every attribute and detail meticulously tracked and logged. But it never ceases to amaze me that once infrastructure becomes ephemeral (created and destroyed as simply and as quickly as code can allow), all sense of organization is kicked to the curb and the Wild Wild West ensues. Exposure to data loss and/or security breach, unbridled growth in costs, and orphaned resources is not a product problem, but an operational problem, and one that we as an IT community need to address within our organizations.

Managing Cloud resources at scale doesn’t have to come with the traditional organizational bloat, added costs, and process bureaucracy that plague most organizations in their implementation of ITIL practices. In fact, I would argue that those trying to manage ephemeral Cloud infrastructure through traditional practices and CMDB methodologies are outright wrong in their approach and are setting their organizations up to fail… The dynamics of Cloud assets simply change too quickly.

Successful Cloud asset management begins with a strategic asset tagging strategy that is systematically applied and monitored ubiquitously across your Enterprise Cloud(s). Resources are available from the individual Cloud providers on how to tag and the number of tags allowed per asset. You can find AWS tagging recommendations here, but to prevent the atrocities of misconfiguration and bloated costs, an organizational tagging strategy and related use policy must be established. This document must outline which tags are required by asset, and specific tag formats. Various articles have been written, but one of the most comprehensive and straightforward white papers on how to establish a cloud tagging strategy was written by the team at DivvyCloud.

Once a tagging policy is created it must be deployed and enforced. How your organization orchestrates infrastructure as code will determine how the tags are deployed. Monitored holistically, these tags can be interrogated and systematically used to enforce broader operational policies, with “if this, then that” outcomes.

Let’s assume that we have a policy that states only resources tagged ENV = PROD + DATACLASS = PUBLIC should be allowed to be associated with a publicly open security group, or configured for public access. We can now continuously monitor for this grouping of tags and take appropriate action when non-compliant assets are discovered, with actions that proactively and immediately quarantine the asset and notify the appropriate team members that the incident has occurred and how to resolve it, before sensitive data is lost.
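
A minimal sketch of that policy check follows, assuming resources are represented as simple dictionaries; the resource shape and the quarantine() and notify_owner() hooks are illustrative stand-ins for your own remediation tooling.

```python
# Sketch of the "if this, then that" policy above: only resources tagged
# ENV=PROD and DATACLASS=PUBLIC may be publicly exposed. The resource shape
# and the two hooks below are illustrative assumptions.

ALLOWED_PUBLIC_TAGS = {"ENV": "PROD", "DATACLASS": "PUBLIC"}

def enforce_public_access_policy(resource):
    """resource: dict with 'id', 'tags' (dict), and 'is_public' (bool)."""
    if not resource["is_public"]:
        return  # nothing to enforce on private resources
    tags = resource.get("tags", {})
    if not all(tags.get(k) == v for k, v in ALLOWED_PUBLIC_TAGS.items()):
        quarantine(resource["id"])
        notify_owner(resource["id"], tags.get("OWNER", "unknown"))

def quarantine(resource_id):
    # Placeholder: e.g., swap the resource into a locked-down security group.
    print(f"quarantined {resource_id}")

def notify_owner(resource_id, owner):
    # Placeholder: alert the responsible team with remediation steps.
    print(f"notified {owner} about {resource_id}")
```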

A well-implemented tagging strategy, in combination with continuous monitoring and an action-driven compliance engine, will cover your entire Enterprise Cloud with real-time proactive protection. In addition to security and peace of mind, it will reduce costs and drive broader operational efficiencies. Bottom line, these are table stakes for the Cloud Enterprise at scale, and the cornerstone of effective Cloud Operations.

————

Thomas Martin is a former CIO and technology leader at the General Electric Company. Prior to leaving GE, Thomas was the Executive Vice President of Application Transformation, tasked with moving 9,000 legacy workloads to public and private cloud infrastructure. He has been a leading evaluator, adopter, and advocate of innovative tools and emerging technology that drive effective operation of cloud infrastructure at scale.

CyberTech Fairfax

DivvyCloud sponsored this event.

Dates: June 13, 2017
Location: Capital One Headquarters, McLean, VA

AWS Public Sector Summit

DivvyCloud attended this event.

Dates: June 12-14, 2017
Location: Walter E. Washington Convention Center, Washington, DC

Stop Overspending in the Cloud: Maximizing Cloud Utilization

Cloud spending continues to rise as enterprises increasingly look for ways to optimize their IT infrastructure. According to Gartner Group, more than $1 trillion in IT spending will, directly or indirectly, be affected by the shift to cloud during the next five years.

That’s no surprise, given the many benefits of shifting to a cloud-based or hybrid cloud model. One of the most touted benefits of moving to the cloud is the cost savings to be had by only using what you need, when you need it. In fact, a recent survey from RightScale found that 53 percent of cloud users cite cost savings as a focus for 2017.

However, despite all the hype around cost savings in the cloud, many enterprises are over-provisioned and paying for resources that they don’t use or need. According to RightScale, “on average, the IT pros surveyed said their organization wastes 30% of its cloud spend. In addition, 39 percent of instance spend is on virtual machines (VMs) that are running at under 40 percent of CPU and memory utilization, with the majority of those running under 20 percent utilization.” This chronic underutilization of cloud infrastructure is a huge waste of money.

Enterprises often buy more capacity than they need to ensure that they have enough resources to handle their current and future growth (legacy data center thinking). They are also often unaware of which applications are most and least utilized, which can result in a large amount of unused cloud resources that are constantly running and costing them money.

This is also true as enterprises increasingly use containers. Containers are meant to be temporary and scope-limited, meaning that they should spin up and spin down as needed. However, the underlying infrastructure that containers run on is often left running constantly, which destroys the value of the pay-as-you-use cloud business model. This only exacerbates the problem of not fully embracing the utility-based cloud pricing approach.

To avoid this waste, enterprises must be able to start and stop their instances to better utilize computing resources. They also need visibility into their networks and to continuously monitor their cloud spend and utilization to get the most out of their investment.

Cloud utilization continues to be a challenge for enterprises. In fact, some large enterprises are struggling to get 10 percent utilization from their cloud infrastructure. As the cloud matures, cloud business models need to evolve as well. The hope and promise of using only what you need when you need it is not yet a reality for most enterprises and optimizing existing cloud usage needs to be a top priority for all cloud users.

Cloud automation and monitoring tools can help control these extra costs and maximize cloud resource utilization. There are “Bots” that can automatically identify instances that have either been running for a long time or have very low capacity utilization. Bots can schedule downtime for instances when not in use: for example, dev/test/QA environments that are not utilized at night, or large-capacity instances used only a few days a month for financial closings or regular risk assessments.

Enterprises can set up more complex rules for these cloud automation tools to follow. For example, a resize Bot can create a list of all instances less than 5 percent utilized over the last 30 days and then resize them to the next smaller level, so the enterprise pays for the smaller size, often a 50% savings per instance. The Bots will keep doing this until they reach the lowest level available, resulting in significant cost savings.
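
As a rough illustration of how such a Bot might find candidates, here is a sketch using boto3 and CloudWatch to compute each running instance’s average CPU over the last 30 days. The 5 percent threshold mirrors the example above; this is not DivvyCloud’s actual implementation.

```python
# Sketch: list running instances averaging under 5% CPU over the last 30 days,
# i.e., candidates for the resize behavior described above.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            stats = cw.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=86400,  # one datapoint per day
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            if points:
                avg = sum(p["Average"] for p in points) / len(points)
                if avg < 5.0:
                    print(f"{inst['InstanceId']}: {avg:.1f}% avg CPU, resize candidate")
```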

So, does the cloud business model really stand up to its pay-as-you-go claims? Only if you carefully monitor your cloud resources and ensure that under-utilized applications are not running when they don’t need to be.

Eyes are Always Bigger Than Your Stomach in the Cloud: Underutilized Instances Bot

Bot of the Week: Underutilized Instance Bot 

Identifies instances that have either been running for a long time, or have very low capacity utilization…contributing to ever-growing monthly cloud bills.

Why do I care:

The cloud provides nearly infinite resources and computing capacity. In most cases this capacity is offered for pennies per hour. With cheap start-up costs and endless space, developers, engineers and other users of your cloud infrastructure often buy more cloud capacity than they actually need. While the short-term costs of doing so are low, leaving these resources running unnecessarily or forgetting about them can result in runaway costs in the cloud. This is a big reason research shows enterprise cloud bills growing 2-3 times over budgeted expectations!

Forgotten Instances Will Come Back to Eat Your Bottom Line

Enterprises in the cloud share a common pain point: virtual instances (that bill out on an hourly basis) are left running when no one is using them. Imagine your utility bill if you left the lights and electronics running all day and all night. Instances are billed in the same way. It’s cheap to spin up instances, but expensive to forget about them.  

Why Test Drive Code on a Kia When You Can Have A Maserati?

More often than not, developers want plenty of power to develop and test their code. With access to the cloud, they can spin up big, beefy servers with little concern for cost. Meanwhile, product teams and operations folks want to make sure they have plenty of capacity to run their workloads. They almost always over-estimate their needs and over-provision capacity. Teams are purchasing 2, 4, or 8 cores of computing power and end up using a very low percentage of that capacity. Consider purchasing a top-of-the-line sports car that can accelerate from 0-100 in just a few seconds. It has incredible horsepower, a sleek design and is made to race. Now imagine driving it in bumper-to-bumper traffic for 20 miles, never able to use its full potential. The same can be true for over-provisioning in the cloud. You have a whole lot of power and capacity that is going to waste.

Use Insights to Make Better Decisions

Before taking action on these instances, it’s useful to monitor these environments to truly understand how your organization is utilizing the cloud. It may be clear that a test environment requires fewer CPUs or that the resource itself doesn’t have to exist beyond one day, one week or one month. Using insights will allow your organization to make the best choices when adding (or culling) capacity in your public cloud. Data makes it clear whether fewer cores are better for your environment or if a reserved instance should be considered.

Automate Resizing and Deletion Of Underutilized Instances

For both the case of over-provisioning and forgotten instances, automation is one of the best solutions to reduce cost and waste. By setting policy in an automation platform such as DivvyCloud, users can specify how long an instance should exist before it is automatically stopped and terminated. For example, if an instance is spun up in the dev/test environment, it can be set to terminate after 7 days. DivvyCloud’s automation Bots will follow the policy and delete the instance whether or not anyone remembers it.

Additionally, resizing can be automated with Bots that identify instances that have run at less than, say, 10% CPU utilization over the last two weeks, and schedule them to be reduced one size. If after two more weeks the instance in question is still under 10% CPU utilization, the Bot will knock it down another size. And so on, until load and capacity reach a reasonable utilization level (or the instance ends up at the smallest size, costing very little). Elasticity is one of the key benefits of the cloud, and having automation that can scale up AND scale down is vital for optimizing cloud usage.
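
Here is a sketch of that step-down logic against the EC2 API. The size ladder shown is a sample assumption for one instance family; a real Bot would maintain the full hierarchy per family.

```python
# Sketch: move an instance one step down a size ladder. The t2 ladder below is
# a sample assumption; instance type changes require the instance to be stopped.
import boto3

SIZE_LADDER = ["t2.2xlarge", "t2.xlarge", "t2.large", "t2.medium", "t2.small", "t2.micro"]

def downsize_one_step(instance_id, current_type, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    idx = SIZE_LADDER.index(current_type)
    if idx == len(SIZE_LADDER) - 1:
        return current_type  # already at the smallest size
    new_type = SIZE_LADDER[idx + 1]
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type}
    )
    ec2.start_instances(InstanceIds=[instance_id])
    return new_type
```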

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

The Future Looks Good for Google in a Multi-Cloud World

There is no doubt that the cloud is becoming the technology infrastructure of the future. Market research firm Gartner projects that by 2020, cloud computing will be a $383 billion market (that’s damn big). The highest growth will come from cloud-based, software-defined infrastructure (Infrastructure-as-a-Service, or IaaS), which is projected to grow 36.8 percent in 2017 to $34.6 billion, according to Gartner.

However, enterprises need to be smart about how they approach migration to the cloud. They need to decide which applications and data should be moved to the cloud and what should remain on-premise or in private cloud deployments. They also need to decide if they want to move everything to one cloud vendor or to multiple cloud vendors. A lot of enterprises initially take the easy path and commit to Amazon Web Services (AWS). However, relying on only one cloud vendor can leave them vulnerable.

The recent Amazon S3 outage wreaked havoc across the Internet and for many it brought to mind the old adage of not putting all of your eggs into one cloud vendor’s basket. AWS has always been the 800-pound gorilla of the cloud vendors and is still the leader of the pack with 40 percent market share (that’s almost double the combined market share of Google, MS Azure and IBM Softlayer…the next three market leaders). Therefore, any slight error or outage can have a huge impact.

Many enterprises are discovering a multi-cloud strategy is a safer bet and there is a lot of excitement for what Google Cloud Platform (GCP) can bring to the table. Google has evolved from its dominance in search engine/online advertising into a cloud computing powerhouse. It is quickly becoming a formidable competitor to Microsoft and AWS as it continues to expand globally. The company recently widened its global cloud footprint with the addition of three new data centers in California, Montreal and the Netherlands.

In addition, at the Forbes CIO Summit in April, Diane Greene, senior vice president of Google, said she believes the Google Cloud Platform could surpass AWS by 2022. This appears to be a real possibility, as Google’s most recent earnings report stated that its cloud growth is outpacing the company’s ad business.

Dan Bieler, principal analyst at Forrester Research, outlined some of Google Cloud’s strengths in this ZDNet article. He believes that Google has a good chance to take on Microsoft and AWS with its global expansion and technology innovation in machine learning and artificial intelligence, which are integrated into its cloud platform.

As enterprises increasingly deploy new applications in the cloud, a multi-cloud approach offers more flexibility and security. It is wise to take a closer look at the various features and benefits that AWS, Microsoft, Google and other cloud vendors offer and then distribute your applications across the clouds that are the best fit.

It is also critically important to find ways to manage the risks of “cloud” effectively across different cloud deployments. Enterprises need new tools and practices to control costs, enforce compliance with industry and operational best practices, and maintain visibility across those clouds. DivvyCloud’s multi-cloud automation solution can do all of this and more. It continuously scans public and private cloud infrastructure, identifies non-compliant resources and automates remediation of increasingly common cloud problems related to security, cost and compliance.

Integrating cloud automation as part of this multi-cloud strategy gives IT managers peace of mind that their infrastructure is secure and an outage or natural disaster won’t bring their entire network down in one fell swoop.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Cloud Expo NYC

DivvyCloud is exhibiting its technology at Cloud Expo NYC in Booth #237, June 6-8. Jeremy Snyder, VP of Business Development at DivvyCloud, is speaking at a general session and during the Tech Talks at the event. Jeremy will share best practices, industry insights and cloud infrastructure strategies.

TechTalk: Tagging Strategies for Cloud Resources – How to use Tags for visibility, accountability and automation of cloud infrastructure.

Tagging virtual instances and resources in your cloud infrastructure is a vital step in defining how your public, private or hybrid cloud environments will operate. Because components of cloud infrastructure are software-defined, these “virtual” resources can very easily sprawl out of control, resulting in runaway costs, security holes and a lack of accountability. Tagging allows resources to be quickly labeled and categorized, creating a standard for organizing your virtual assets across cloud environments. In this talk we will discuss best practices for using tagging in your cloud infrastructure to reduce costs and risks while remaining compliant with enterprise standards.

Session:  Best Practices for Enterprise Cloud Adoption – How cloud infrastructure automation delivers the agility, speed and cost benefits while reducing many risks.

This talk centers around how to automate best practices in a multi-/hybrid-cloud world based on our work with customers like GE, Discovery Communications and Fannie Mae. Today’s enterprises are reaping the benefits of cloud computing, but also discovering many risks and challenges. In the age of DevOps and the decentralization of IT, it’s easy to over-provision resources, forget that instances are running, or unintentionally expose vulnerabilities. We will discuss the approaches to take control of your cloud with self-healing infrastructure, while realizing the promised agility, speed and cost benefits of the cloud.

Dates: Jun 6-8, 2017
Location: Javits Center, New York City, NY

Media and Entertainment Companies Flock to the Cloud

Over the past year and a half, there has been widespread cloud adoption among enterprises and many people in the technology industry are claiming that 2017 is the year of the “enterprise cloud.”  According to IDG’s 2016 Enterprise Cloud Computing Survey, “organizations are using multiple cloud models to meet their business needs, including private (62%), public (60%), and hybrid (26%).”

As enterprise cloud adoption grows, best practices and business cases are emerging, and other industries, like media and entertainment, are jumping on the bandwagon. From virtual reality and smart TVs to live streaming of events, the amount of digital content being created and consumed is abundant. A majority of this massive amount of data is produced at high resolution and in multiple formats, and therefore requires a robust technology infrastructure to support it.

As a result, many media giants are increasingly adopting new cloud technologies to manage the explosive growth of their digital content. As these media companies flock to the cloud to manage and store their digital content, they must be smart about their approach.  For example, some may want to consider a hybrid approach to the cloud. They could use the public cloud for things such as global content distribution and collecting content from providers, but also store some of their more proprietary digital assets in a private cloud or on-premise.

Cloud solutions can create storage efficiencies and allow real-time access to content anytime, anywhere. However, there are also some challenges to be aware of when it comes to migrating to the cloud such as runaway costs and the ability to scale. In addition, the IDG survey identified these top three challenges that enterprises face when adopting public cloud technologies: concerns about where data is stored (43%), cloud security (41%) and vendor lock-in (21%).

Another factor to consider when migrating to the cloud is how to manage the massive amount of digital content being delivered around the world. It is too big to be managed from a human standpoint and there is no room for error.

And, once the transition to the cloud is complete, these companies need to efficiently manage all of their cloud resources to ensure optimal performance. Performance is especially important in the media and entertainment industry. For example, something as small as a six-second delay in the streaming of an advertisement at the beginning of a media company’s hit show could cost them hundreds of thousands of dollars owed back to the advertiser if the ad is not delivered correctly or at all.

Cloud automation and management technology is increasingly easing this burden and can help media companies more efficiently manage their virtual resources at scale across different public and private cloud technologies. Features such as auto-scaling and workload grooming allow these companies to ensure that their costs are managed and contained. These tools also take the burden off of the IT department by automatically monitoring cloud resources and identifying and fixing issues and security threats as they arise.

Leading media companies like Discovery Communications are using cloud automation technology from DivvyCloud to reduce cost, improve security and ensure compliance across their entire cloud infrastructure. You can visit DivvyCloud at the NAB Show, April 22-27, 2017 where they will be featured as part of SPROCKIT’s 2017 program – an exclusive technology alliance that is helping to accelerate cloud strategies for the hottest companies in media and entertainment.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

DivvyCloud Accepted Into Exclusive Media and Entertainment Technology Alliance for Innovative Companies

SPROCKIT® Brings Together Hot Startups and Iconic Companies at NAB Show and Throughout the Year

Arlington, Virginia (March 23, 2017) – DivvyCloud, a leading developer of innovative software to automate and manage multi-cloud infrastructure, today announced that it has been accepted into SPROCKIT’s exclusive technology alliance for media and entertainment industry collaboration.

SPROCKIT will showcase up to 30 of the most innovative market-ready startup companies through a year-round program that shines the spotlight on industry game-changers through two channels: NAB Show®, the world’s largest and most important media and entertainment event, and SPROCKIT® Sync, the exclusive community of media and entertainment decision-makers that meets three times a year.

To become part of the SPROCKIT Class of 2017, a company must have market validation and must prove the potential to significantly impact the media and entertainment industry.  

The media and entertainment industry is an early adopter of cloud infrastructure to manage the explosive growth of digital content. Like many other industries moving to the cloud, it is not always a smooth road. Enterprises frequently experience runaway costs, security problems and the inability to ensure compliance of their cloud operations at scale. DivvyCloud’s automation software, BotFactory, has a proven track record of delivering cost savings and autonomous remediation of non-compliant cloud resources in near-real time.

“We’re honored that our largest media company customers nominated us to be a part of SPROCKIT,” said Brian Johnson, CEO at DivvyCloud.  “We look forward to helping accelerate cloud strategies for the hottest companies in media and entertainment.”

About SPROCKIT

SPROCKIT is the media, entertainment and technology alliance of corporate executives and industry-vetted emerging companies showcasing and collaborating on products, services and new revenue models. SPROCKIT, in collaboration with and support from NAB Show, shines the spotlight on the industry’s most promising market-ready start-ups from around the world. SPROCKIT start-ups have proven products, customers, and are ready to scale. Learn more at sprockitglory.com.

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating security, compliance and cost optimization of public and private cloud infrastructure. Using DivvyCloud, customers can leverage programmatic Bots to identify and remediate common cloud problems in real time. DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s fast-changing, multi-cloud world. For more information, visit: www.divvycloud.staging.wpengine.com.

Media Contact
Meredith Bagnulo
PR for DivvyCloud
meredith@bagnulocomm.com
(303) 513-7494

DevOps DC Meetup

DivvyCloud spoke and sponsored this event.

Dates: Mar 14, 2017
Location: Online, due to weather

AWS Montreal

DivvyCloud spoke and sponsored this event.

Dates: Mar 9, 2017
Location: La Gare Co-Working Space

AWS Ottawa

DivvyCloud spoke and sponsored this event.

Dates: Mar 8, 2017
Location: Rebel.com

Bad Idea: Leaving the Keys to Your Cloud Castle Lying Around!

Bot of the Week: S3 Bucket Permissions

This bot continuously monitors and identifies storage containers such as AWS S3 buckets which have read, write or delete permissions open to the world.

Why Do I Care?

Amazon Web Services S3 buckets are storage containers in the cloud that are used to house data, documents and images, or they can be used to host static websites. Without properly specified permissions, anyone can read, modify or delete the contents of a bucket. Running buckets with this type of open access policy can result in data loss, exposure and potential downtime in the case of static website hosting.

Permissions matter

With S3, you can put access permission controls on your buckets. These govern who can read, write, and delete from that bucket. Let’s say you’re hosting a website from S3. You’ll want to provide visitors full access to read the information on the site. You wouldn’t, however, want them to be able to modify or delete your content. On the other hand, if you have a bucket that stores personal or sensitive information such as customer records, you may want to pull public read permissions from your policy. By setting and automating specific permissions you can prevent the viewing or altering of stored data, protecting the organization and its clients.
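
To make this concrete, here is a minimal boto3 sketch that flags buckets whose ACLs grant permissions to the global “AllUsers” group, i.e., the whole world. It illustrates the underlying S3 API rather than the Bot’s own code.

```python
# Sketch: flag S3 buckets whose ACL grants any permission to "AllUsers".
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            # Permission is READ, WRITE, READ_ACP, WRITE_ACP, or FULL_CONTROL.
            print(f"{bucket['Name']}: {grant['Permission']} open to the world")
```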

Guard your buckets

The best way to guard your buckets is to tailor access list control permissions to only the minimum level of access required. Typically only static websites should be open to the world and all other buckets should be locked down to only authorized stakeholders.

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo.  Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

AWS Toronto

DivvyCloud spoke and sponsored this event.

Dates: Feb 23, 2017
Location: theScore

Don’t lose track of your instances, tag them.

Bot of the Week: Tagging Audit Bot

This bot inspects cloud resources to validate that they are tagged with appropriate key/value pairs (e.g., “Environment: Production”). The policy can be applied as a global policy, or fine-tuned to accommodate different strategies per cloud, account, resource type, etc. The definition and enforcement of tagging standards to organize your cloud infrastructure can dramatically improve visibility, compliance, chargeback/showback, and taxonomy across your entire footprint. Combined with DivvyCloud Bots, tags form the foundation for policy automation and operational compliance at scale.
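
A minimal sketch of what a tagging audit boils down to, assuming a simple required-tag policy; the policy contents and resource shape here are illustrative, not DivvyCloud’s actual schema.

```python
# Sketch: validate a resource's tags against a required-tag policy.
# REQUIRED_TAGS is an illustrative example policy.

REQUIRED_TAGS = {
    "Environment": {"Production", "Staging", "Dev"},  # allowed values
    "Owner": None,       # must exist; any value accepted
    "CostCenter": None,
}

def audit_tags(resource_tags):
    """Return a list of violations for one resource's tag dict."""
    violations = []
    for key, allowed in REQUIRED_TAGS.items():
        if key not in resource_tags:
            violations.append(f"missing required tag '{key}'")
        elif allowed is not None and resource_tags[key] not in allowed:
            violations.append(f"tag '{key}' has invalid value '{resource_tags[key]}'")
    return violations

print(audit_tags({"Environment": "Prod", "Owner": "jane"}))
# ["tag 'Environment' has invalid value 'Prod'", "missing required tag 'CostCenter'"]
```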

Why do I care?

Tagging instances and resources in the cloud is a vital step in defining how your public, private or hybrid cloud environments will operate. Because components of cloud infrastructure are software-defined, these “virtual” resources can very easily sprawl out of control, resulting in run-away costs, security holes and lack of accountability. Tagging allows resources to be quickly labeled and categorized, creating a standard for organizing your cloud. Tags can be created to identify the environment, cost center, resource owners, projects, security levels, and almost any other attributes that are important to your operating model.

Our Tagging Audit Bot ensures all resources are tagged appropriately with valid values, so things don’t get lost in dynamic cloud environments. Tagging has become so critical to cloud management that AWS recently increased its maximum number of tags per resource from 10 to 50 to meet customer demands (which is a good thing, since their published tagging strategy requires at least 20 tags!).

With new clients, we often run the Tagging Audit Bot first thing to show how much of their infrastructure is “undefined”.  By quickly implementing a basic tagging strategy, the customer can finally get actionable data on their cloud environments.

The cloud is not a datacenter

Unlike the traditional datacenter, the cloud is a dynamic and ever-changing environment that can morph and transform at will. Cloud self-service provisioning allows users with various positions, departments and skill levels to access and change the cloud environment as they see fit. All this considered, the cloud cannot be treated like a physical datacenter where you can actually attach a physical tag or label. Over time, the cloud becomes increasingly unorganized and insecure, resulting in wasted resources and vulnerabilities.

Next Level: Tagging Strategy

Once all resources are labeled, creating processes and automation around those tags becomes simple and can make operating in the cloud dramatically more efficient. AWS provides a comprehensive strategy for tagging on their site that describes how tags can be used to categorize resources. With BotFactory, we allow users to create and automate actions according to the tags used. For instance, the Scheduled Instances Bot can automate shutdown from 5:00 P.M. to 9:00 A.M. for resources tagged as “Environment: Dev” or “Schedule: 9-5”. Another set of tags can identify resources that need special compliance standards such as HIPAA or SOX, with associated data encryption or data sovereignty requirements. All of our 100+ Standard Bots, and any user-configured Bots, can leverage the tags used on your cloud resources.

Enterprise adopters

Our team has learned from the cloud’s earliest enterprise adopters. These organizations were tasked with moving thousands of instances to the cloud while keeping their data protected and their costs under control. DivvyCloud customer General Electric uses tags for its Reaper Bot to scan the environment and make sure all assets in the cloud are attached to an application (learn more in the video Q&A). Similarly, Discovery Communications has used Tagging Bots during their migration to the cloud (video Q&A). Bots have helped both companies minimize their costs while maintaining order and compliance in their infrastructure.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

BotFactory Now Available for Test Drive on Google Cloud Platform Using Orbitera

Cloud Adopters Can Test Drive BotFactory on GCP to Automate Security, Compliance and Cost Optimization

Arlington, Virginia (January 23, 2017) – DivvyCloud, a leading developer of innovative technology to automate and manage cloud infrastructure, today announced availability of the BotFactory Test Drive on Google Cloud Platform (GCP). DivvyCloud has been one of a few partners working with Orbitera as part of their beta program for Test Drives on GCP.

Orbitera Test Drives are fully functioning and interactive software demonstrations offered in the cloud. This is a popular way for businesses to try software before buying it — no software license, credit card or even cloud account required.

“We are excited to make BotFactory available for easy, one-click test drives on GCP,” said Peter Scott, COO of DivvyCloud. “Orbitera’s Test Drive solution lowers the barriers to trying out new software solutions, and we are happy to be one of the first three software providers to go live on GCP.”

DivvyCloud’s BotFactory solution is unique in the marketplace with its ability to track real-time changes within cloud infrastructure and take customer-defined, automated actions to fix problems and ensure policy compliance. Customers can leverage over 100 standard automation Bots to address a wide range of security, compliance and cost optimization challenges commonly faced by any organization adopting cloud infrastructure.

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating security, compliance and cost optimization of public and private cloud infrastructure. Using DivvyCloud, customers can leverage programmatic Bots to identify and remediate common cloud problems in real time. DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s fast-changing, multi-cloud world. For more information, visit  www.divvywebsite.staging.wpengine.com.

Media Contact
Meredith Bagnulo
PR for DivvyCloud
meredith@bagnulocomm.com
(303) 513-7494

Instances with Failed Status Checks

What it does: matches instances that fail the system/reachability status checks

This bot identifies compute instances which fail instance/system reachability checks. When a failure occurs, it means the system is not accessible over the Internet and is likely running on failed hardware. It is strongly encouraged to migrate these failed systems to new hardware.

Why do I care?

Monitoring the lifecycle state and status checks of your cloud instances ensures that your systems are running properly AND that you have access to the compute capacity that you are paying for. Instances that fail status checks can result in downtime for your organization and wasted money.

The Failed Instances Bot checks your systems every 10 minutes and automatically migrates your data from failed or failing hardware in AWS.

Why do failed instances occur?

Within Amazon there are two states for availability: lifecycle state and status checks. The lifecycle state defines whether an instance is running, stopped or has been deleted. Status checks determine whether the virtual instance your application or data is running on is working properly. Amazon sends periodic heartbeats to the underlying hardware, at the process, hypervisor and network layers, to test status and lifecycle. If any one of those checks fails, your instance is unreachable and your data inaccessible.

Status checks are important because many people only monitor the lifecycle state of their instances, and do not monitor network accessibility. Turning on this DivvyCloud “Instances with Failed Status Checks” Bot is very useful, because most monitoring detects lifecycle changes and not status checks. In the situation of a failed status check, organizations typically migrate the system to a different host, or need to conduct deeper inspection and remediation.
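
For the curious, the sketch below polls the same signals directly with boto3; DivvyCloud harvests this data continuously, but the raw check is just a couple of API calls.

```python
# Sketch: report instances failing EC2 system or instance reachability checks.
import boto3

ec2 = boto3.client("ec2")
for status in ec2.describe_instance_status()["InstanceStatuses"]:
    system_ok = status["SystemStatus"]["Status"] == "ok"
    instance_ok = status["InstanceStatus"]["Status"] == "ok"
    if not (system_ok and instance_ok):
        print(
            f"{status['InstanceId']}: system={status['SystemStatus']['Status']}, "
            f"instance={status['InstanceStatus']['Status']}; "
            "consider a stop/start to move onto healthy hardware"
        )
```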

It’s not uncommon for hardware to fail

Hardware failure can occur for a number of reasons. Equipment gets old, moving parts break, overheating occurs and hard disks fail. These failures are not uncommon at scale, and knowing this can help your organization better prepare for and react to failures. If you don’t have automation in place to detect system failures, it can be hard to find the affected systems. Cloud providers may email you, but often not fast enough for applications with high uptime requirements; it could be an hour or two before you are alerted about the problem. Our Bot will identify failed instances within a maximum of 10 minutes using BotFactory’s continuous API-based data harvesting. That is much closer to real time than waiting for the AWS monitoring system to catch the failure.

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo.  Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

2017 NAB Show

DivvyCloud will have a SPROCKIT booth and be exhibiting at this event.

Dates: Apr 22 – Apr 27, 2017
Location: Las Vegas Convention Center

DivvyCON2

DivvyCloud is excited to announce our second customer event! DivvyCON2

Dates: September 2017
Location: Rosslyn, VA

AWS re:Invent 2017

DivvyCloud will have a booth and be exhibiting at this event.

Dates: Nov 27 – Dec 1, 2017
Location: Las Vegas Convention Center

SCHEDULED INSTANCES BOT

What it does: Schedules automatic starts/stops for cloud resources to save money

The Scheduled Instance Bot provides the ability to configure daily stop/start schedules for compute infrastructure. Its goal is to drive down monthly overspending (by as much as 50%) which results from instances running when they are not needed, often at night or on weekends. Different configurations and policies can be set for specific times, days and/or cloud environments, giving the organization maximum flexibility. Scheduled downtime policies can be configured to allow cloud users to opt in or out of auto-schedules via tagging. This can be great for developers and stakeholders who are working late or off-hours.
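
A minimal sketch of such a schedule-enforcement pass follows, assuming instances opt in with a tag like “Schedule: 9-5”. The tag name, the fixed 9-to-5 window, and running it periodically from cron are all illustrative assumptions, not the Bot’s actual configuration.

```python
# Sketch: stop tagged instances outside a 9-5 window and start them inside it.
# Tag name/value and the fixed window are example assumptions; run periodically.
import boto3
from datetime import datetime

def enforce_schedule(region="us-east-1", tag_key="Schedule", tag_value="9-5"):
    ec2 = boto3.client("ec2", region_name=region)
    in_work_hours = 9 <= datetime.now().hour < 17

    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                state = inst["State"]["Name"]
                iid = inst["InstanceId"]
                if in_work_hours and state == "stopped":
                    ec2.start_instances(InstanceIds=[iid])
                elif not in_work_hours and state == "running":
                    ec2.stop_instances(InstanceIds=[iid])
```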

Why do I care?

The worst thing you can do in a public cloud world is run it like a “virtual data center”. You don’t want resources running 24/7/365, and you really want to avoid paying good money for unused capacity. Idle compute capacity typically accounts for 40 to 45 percent of a company’s public cloud bill. Unlike a physical data center, cloud usage is charged like a utility bill: you pay for exactly what you “turn on”. It is easy to spin up instances and forget about them, resulting in excessively high monthly bills and operational waste.

Scheduled Starts and Stops

Because it is so easy to spin up instances in a cloud-ready world, it is not uncommon for forgotten resources to run 24/7 for days, weeks or months. The organization pays for every unused and over-provisioned resource in its cloud, which can cost a lot of money in the long run. Public cloud monthly bills often grow to 2-3x the expected budget, in no small part due to excess capacity running when it’s not needed.

This can be addressed by scheduling downtime at the end of the day and spinning instances up again the following morning. Just like your light bill: you don’t want to leave the lights on if you’re not home. When dev environments are not in use at night, shutting them down saves plenty. Here’s some simple math:

500 development instances

Shut down M-F, 10pm – 8am = 50 hrs

Shut down Sat & Sun = 48 hrs

500 instances × $0.12/hr × 98 hrs = $5,880/wk

58% savings
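
For anyone who wants to sanity-check or adapt that math, here is a tiny Python sketch. The $0.12 hourly rate is the example figure above, not a quoted AWS price.

    # Verify the example savings math; all numbers are illustrative.
    INSTANCES = 500
    HOURLY_RATE = 0.12            # example USD per instance-hour
    weekday_off_hours = 5 * 10    # M-F, 10pm to 8am
    weekend_off_hours = 48        # all of Saturday and Sunday
    off_hours = weekday_off_hours + weekend_off_hours      # 98 hrs
    weekly_savings = INSTANCES * HOURLY_RATE * off_hours
    print(f"${weekly_savings:,.0f}/wk saved, {off_hours / 168:.0%} of the week")
    # -> $5,880/wk saved, 58% of the week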

Here at DivvyCloud, our devs wind down around 10 or 11pm. We simply shut down our development environments at midnight and leave the instances off. Developers can easily turn them back on when they get rolling the next day, or three days later when they revisit that specific project. Meanwhile, we don’t pay for unused resources. We have a large customer that follows the same strategy, and the cost savings are impressive.

Starting and stopping is not always the best solution for every usage pattern. Sometimes a test environment is needed in a developer account and will only be used for a few hours in one day. A deletion can be scheduled to terminate those instances, and accounts can even have policies that flag instances for deletion after a set number of days, an ideal option for temporary instances used for simple tasks and reports.

Empowering Users with Tagging

Altering policies for multiple accounts can be a headache when a team wants to work outside of scheduled hours. Using tagging strategies to opt in or out of scheduled stops gives dev teams more flexibility in the cloud. These tags can temporarily suspend a shutdown policy, and because your environment is continuously scanned, the original policy reactivates and shuts down the instances at the established time.
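
As a rough illustration of how a tag-driven schedule can work, the sketch below stops running instances unless they carry an opt-out tag. The tag key, region and trigger timing are placeholders invented for this example, not BotFactory’s actual configuration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

    def stop_unprotected_instances():
        """Stop every running instance that has not opted out via tag."""
        paginator = ec2.get_paginator("describe_instances")
        running = [{"Name": "instance-state-name", "Values": ["running"]}]
        for page in paginator.paginate(Filters=running):
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                    if tags.get("schedule-opt-out") == "true":  # hypothetical tag
                        continue          # opted-out instances keep running
                    ec2.stop_instances(InstanceIds=[inst["InstanceId"]])

    stop_unprotected_instances()   # run from a nightly scheduler (e.g., cron)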

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo.  Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

OpenStack Meetup DC

DivvyCloud spoke at and sponsored this event.

Dates: December 15, 2016
Location: MakeOffices

TechBreakfast NYC

DivvyCloud spoke at and sponsored this event.

Dates: December 13, 2016
Location: New York City

CHECK EXPIRED SSL CERTIFICATES

What it does: Identifies expired / soon to expire SSL certificates

This bot identifies SSL certificates which are set to expire and/or have already expired. When this occurs it can present end-users accessing web-based applications with browser warnings and errors indicating that the site they are accessing is not secure.

Why do I care?

If your company provides a public-facing service, visitors want to know their personal information is safe when they enter sensitive data into your website or app. SSL certificates are one way customers know their information is protected: websites served over “https” carry an SSL certificate that ensures customer data is encrypted in transit and inaccessible to eavesdroppers. Allowing these certificates to expire puts visitors at risk, resulting in a poor user experience and potential lost revenue when users do not feel comfortable doing business on your site.

Get New Certs Before They Expire

SSL certificates are usually set to expire every one, two or three years. It is easy to forget or lose track of when certs need to be renewed throughout the year, and it is very common to get blasted by the product or customer support teams as complaints start rolling in from users or abandonment rates spike.

With the Check SSL Certificate Bot you will be reminded ahead of time so you can generate a new SSL certificate via your trusted CA (Certificate Authority) and upload it to the cloud account. Doing this will ensure that end-users do not receive SSL identity warnings/errors when accessing secure sites/applications. There are only a handful of Certificate Authorities that browsers recognize and trust.
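
A bare-bones version of such an expiry check fits in a few lines of Python; this sketch probes a single host, and the 30-day warning window is an assumption, not the Bot’s default.

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(hostname, port=443):
        """Fetch the server's certificate and return days until it expires."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # notAfter looks like "Jun  1 12:00:00 2025 GMT"
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        not_after = not_after.replace(tzinfo=timezone.utc)
        return (not_after - datetime.now(timezone.utc)).days

    if days_until_expiry("example.com") < 30:      # assumed warning window
        print("Certificate expires within 30 days - renew it now")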

Keep Hackers from Sniffing Your Customers’ Information

Without proper encryption and SSL certificates, it’s not hard for a hacker to start “sniffing” for your customers’ usernames, passwords or bank account information. An SSL certificate is like a digital fingerprint, proving to the customer that your business site is secure, and a validation process lets the customer know it is safe to enter personal information on the site. Without up-to-date certificates, site visitors will receive a message warning them that the site they are visiting is not secure and that they take on a certain level of risk by doing business there.

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo.  Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

AWS re:invent 2016: What’s new in the cloud?

Amazon Web Services held its fifth annual AWS re:invent conference in Las Vegas. This year’s event was not only packed with attendees and vendors, but also chock-full of news and announcements. The public cloud provider released dozens of new features and services, ranging from new instances and storage capabilities to migration and management tools.

While AWS is the leader in public cloud, this year it was evident that AWS realizes its customers’ needs are changing. Unless AWS starts playing nice with other platforms and provides integration tools, customers can find those capabilities from other vendors. The latest offerings from AWS are aimed at the demand for tools that make the customer experience easier, faster, safer and cheaper.

The cloud is so important today because of what can be done with it. With the growth of IoT (internet of things), AI (artificial intelligence) and other computing innovations, the demand for platforms and systems that make operating in the cloud efficient is higher than ever. AWS seems to understand that demand and has created solutions within its platform so that its customers have options right within AWS. This also demonstrates Amazon’s understanding of what it takes to keep their customers operating mainly, if not exclusively, in their cloud.

Here are some highlights of the focus areas of AWS’s latest cloud features.

Migration

Moving data at scale to the cloud can be a huge challenge. Enterprises may be migrating terabytes or even petabytes of on-premises data, which demands significant time and specialized hardware, and creates no small amount of stress.

AWS Snowmobile was created to make that transition easier. Each Snowmobile can store up to 100 PB of data, allowing customers to move exabytes to AWS in less time than ever before. The data is secured with layers of protection and encrypted with AWS’s Key Management Service.

Management Tools

Once data is migrated to the cloud, how the data is managed is very important. The market is steadily filling with platforms that streamline processes, reduce costs, enable or disable tracking and auditing abilities and more. AWS introduced five new features and services around cloud management. These services offer inventory managers, cloud health dashboards, Chef automation (another chance for users to incorporate outside technologies), data auditing and provisioning.

Hybrid with VMware

In order to make working within AWS better for VMware customers, VMware on AWS Cloud was created. This service allows VMware customers to run and deploy their workloads on AWS seamlessly. Hybrid capabilities are especially attractive for companies that have reason to maintain data between various clouds.

Security, Identity & Compliance

Three new services are available for security and compliance. These additions focus on generating compliance reports, creating security groups and limiting attacks through AWS Shield. It will be interesting to see whether security updates and services continue to be a focus, given that security is the biggest concern for most customers.

More and more services makes it hard to break away from AWS

Although Amazon’s latest announcements are great news for the company and AWS customers, they are all AWS-specific. The market has shown that today’s enterprise clouds are public, private or hybrid, and there is a good chance that organizations use cloud resources outside of or in addition to AWS. This means organizations will still have to look beyond AWS’s multitude of offerings to make workloads in other clouds just as efficient.

Software and platforms that are cloud agnostic serve a unique purpose in the market. Maintaining fluidity between clouds will continue to be important as organizations design and implement their own cloud strategies, and automation and management across clouds is a necessary capability for a cloud environment with unlimited possibilities. If Amazon stays on target with hybrid offerings such as its VMware service, it will be able to serve the growing need in the market. If its products remain focused only on AWS, customers may feel the need to choose other options to gain freedom in the cloud.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Discovery Communications Invests in Cloud Automation Start-up DivvyCloud


DivvyCloud’s BotFactory Provides Visibility and Automated Remediation to the Growing Scope of Challenges Companies Face in Adopting and Scaling Cloud Infrastructure

Arlington, Virginia (November 29, 2016) – DivvyCloud, a leading developer of innovative technology to automate and manage cloud infrastructure, announced today that Discovery Communications has invested in its latest round of funding, contributing to the more than $3M the three-year-old startup has raised from venture capital and angel investors to date.

Discovery Communications made the decision to invest in DivvyCloud after being an enterprise customer for over a year. “Given the value that DivvyCloud has delivered to Discovery in our adoption of the cloud, we see a real potential for growth that we’re excited to be a part of,” said John Honeycutt, Discovery Communications Chief Technology Officer. “The leadership team at DivvyCloud brings a compelling mix of deep technical expertise and operational experience to support critical enterprise solutions at scale.”

DivvyCloud’s BotFactory solution is unique in the marketplace with its ability to track real-time changes within cloud infrastructure and take customer-defined, automated actions to fix problems and ensure policy compliance. Customers can leverage over 100 standard automation Bots to address a wide range of security, cost and compliance challenges commonly faced by any organization adopting cloud infrastructure.

The underlying DivvyCloud platform is fully extensible, allowing developers the freedom and flexibility to address specific use cases and integrate other enterprise systems or data sources. The platform harvests and normalizes data and lifecycle controls across all the leading cloud platforms (AWS, GCP, Azure, OpenStack, VMware and others) to reduce complexity and protect customers against cloud vendor lock-in.

“We are very pleased with the confidence Discovery has shown in DivvyCloud by making this investment,” said DivvyCloud CEO Brian Johnson. “Technology companies increasingly rely on the cloud to deliver value to their customers and DivvyCloud helps ensure these cloud strategies scale effectively.”

About Discovery Communications
Discovery Communications (Nasdaq: DISCA, DISCB, DISCK) satisfies curiosity and engages superfans with a portfolio of premium nonfiction, sports and kids programming brands. Reaching 3 billion cumulative viewers across pay-TV and free-to-air platforms in more than 220 countries and territories, Discovery’s portfolio includes the global brands Discovery Channel, TLC, Investigation Discovery, Animal Planet, Science and Turbo/Velocity, as well as OWN: Oprah Winfrey Network in the U.S., Discovery Kids in Latin America, and Eurosport, the leading provider of locally relevant, premium sports content across Europe. Discovery reaches audiences across screens through digital-first programming from Discovery VR, over-the-top offerings Eurosport Player and Dplay, as well as TV Everywhere products comprising the GO portfolio of TVE apps and Discovery K!ds Play. For more information, please visit www.discoverycommunications.com.

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating compliance and optimization of public and private cloud infrastructure. Using DivvyCloud, customers can leverage programmatic Bots to identify and remediate security, cost and scale problems in real time. DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s fast-changing, multi-cloud world. For more information, visit www.divvywebsite.staging.wpengine.com.

Media Contact
Meredith Bagnulo
PR for DivvyCloud
meredith@bagnulocomm.com
(303) 513-7494

AWS re:Invent 2016

DivvyCloud had a booth and exhibited at this event.

Dates: Nov 28 – Dec 2, 2016
Location: Las Vegas Convention Center

Solving the 10 AWS security blunders automatically with bots

Recently, Fahmida Y. Rashid wrote a great piece on InfoWorld, titled “10 AWS Security Blunders and how to avoid them.” While not an exhaustive list, it is a very useful guide towards a security-centric approach to using cloud computing platforms.

These are extremely common mistakes that we see every day working with customers, and mistakes that many customers aren’t even aware they are making. That lack of awareness is a real concern. That’s why solutions like BotFactory, which automatically find these vulnerabilities in customer environments and can automatically correct them, are so valuable.

I’ve mapped out BotFactory’s capabilities for 9 of the 10 blunders Fahmida highlighted in her article.  The first item in the list, “Who’s in charge of security,” is more about awareness.  The other 9 items are specific security issues where we can apply automation Bots to ensure compliance.

1. Who’s in charge of security?

First and foremost, understanding the shared responsibility model is a must. Realize that AWS is responsible for the security “of” the platform, and you, as a customer, are responsible for keeping your company safe “in” the cloud. That leads to the obvious follow up question: In your company, WHO is responsible for security?

This may be an individual, such as a Chief Information Security Officer (CISO), or it could be a team. But without the understanding of who owns security, no one does. This is an organizational challenge, not a technical one. As such, there is no bot for this one.

2. Forgetting about logs

As the article points out, having complete logging on your account is important for many reasons:

CloudTrail can be used for security analysis, resource management, change tracking, and compliance audits.

When AWS accounts are compromised, CloudTrail is typically the first thing that a hacker will disable. This allows the hacker to cover their tracks, and obscure their actions on your account. A thread on Quora about a hacked account validates this approach.

BotFactory includes a bot to identify AWS accounts that have CloudTrail disabled. The user can choose a response, including options such as a push notification or generating a report.
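
For a sense of what such a check involves, here is a minimal boto3 sketch that flags an account with no trail, or a trail that exists but has logging switched off. It is an illustration, not DivvyCloud’s implementation.

    import boto3

    # Illustrative CloudTrail check; the region is an assumption.
    ct = boto3.client("cloudtrail", region_name="us-east-1")
    trails = ct.describe_trails()["trailList"]
    if not trails:
        print("No CloudTrail configured - account activity is unaudited")
    for trail in trails:
        status = ct.get_trail_status(Name=trail["TrailARN"])
        if not status["IsLogging"]:
            print(f"Trail {trail['Name']} exists but logging is disabled")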

3. Giving away too many privileges

Organizations need to separate user permissions from application permissions. Often, in their daily work, people run into misconfigurations and it’s very tempting to just open permissions to everything to let people get stuff done. However, this is a critical security vulnerability.

BotFactory includes a bot called “Cloud User Policy Audit”. This allows the company to find and fix these problems in near real-time.

4. Having powerful users and broad roles

The challenge of policies and role-based permission schemes for many companies is that they are confusing. They can be assigned to user accounts, service accounts, or directly to resources like EC2 instances and S3 buckets. While this often requires planning and thought, BotFactory provides multiple bots to ensure security on resources. For example, there are bots to ensure that an S3 bucket does not have global permissions, and that user accounts don’t have privileged policies.

5. Relying heavily on passwords

Passwords are a common source of vulnerabilities on any IT system, from a single laptop to the most complex multi-tier applications. AWS and other cloud platforms typically provide tools to help companies manage them; keeping the shared responsibility model in mind, it’s then up to the company to use those tools. BotFactory provides multiple bots to help companies manage this process. Two example bots are:

  • Clouds with weak password policy: lets the company define and enforce length and complexity requirements
  • Ensure MFA is enabled on user accounts: ensures that accounts not using two-factor authentication (2FA or MFA) are identified in real-time.

6. Exposed secrets and keys

A cloud best practice is to create IAM accounts for services and applications. These keys then need to be managed and rotated regularly. A useful bot here is “Cloud User API Key Audit”, which allows companies to target service accounts by key age and enforce proper key rotation. Multiple copies can be made for various environments, such as test/dev versus production, or for various needs, such as service accounts versus user accounts.
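
A key-age audit in the same spirit might look like the following sketch; the 90-day threshold is an example value, since the real bot makes the age configurable.

    import boto3
    from datetime import datetime, timezone

    MAX_KEY_AGE_DAYS = 90     # example threshold; configurable in practice
    iam = boto3.client("iam")
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                    print(f"{user['UserName']}: key {key['AccessKeyId']} "
                          f"is {age} days old - rotate it")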

7. Not taking root seriously

Repeat after us – DISABLE ROOT now. Unsurprisingly, the bot to find and disable root is called “Clouds with active root account”. In under 10 seconds, companies can find and take action on the accounts that have root enabled.

8. Putting everything in one VPC or account

While a single account makes management easier, it is a potential security risk. DivvyCloud’s bots make the cross-account management easy for customers. All bots can be applied to individual accounts, multiple accounts, or all accounts, with just a few clicks. There’s no need to shy away from spreading resources out or isolating applications and data to provide for enhanced security.  BotFactory applies automation policy compliance consistently across clouds and cloud accounts.

9. Leaving wide open connections

This is one of the most common problems across all clouds, not only AWS. For ease of use, and for public web service consumption, many companies just open services to the whole Internet, namely 0.0.0.0/0. The BotFactory “Audit Security Group” Bot lets companies define targeted ports that should never be open to the world.

The power of this bot is that it goes beyond simply finding these open ports, and closes them. Customers can choose additional actions like logging to an audit trail, generating notifications or integrating into third-party tools via API.
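
The detection half of that workflow reduces to scanning security group rules for 0.0.0.0/0. Here is a hedged sketch with an example port blacklist rather than the bot’s real defaults:

    import boto3

    FORBIDDEN_PORTS = {22, 3306, 6379}                   # example blacklist
    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for group in page["SecurityGroups"]:
            for perm in group["IpPermissions"]:
                world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                                 for r in perm.get("IpRanges", []))
                if world_open and perm.get("FromPort") in FORBIDDEN_PORTS:
                    print(f"{group['GroupId']}: port {perm['FromPort']} "
                          f"is open to the world")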

10. Skimping on encryption

Last but not least, what are you storing in the cloud? And shouldn’t you protect it? DivvyCloud provides bots for verifying and enforcing encryption on EBS, RDS and Redshift.
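
As an example of what the EBS half of that verification can look like, here is a short boto3 sketch that lists unencrypted volumes; RDS and Redshift checks follow the same pattern against their own APIs.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    for page in ec2.get_paginator("describe_volumes").paginate():
        for volume in page["Volumes"]:
            if not volume["Encrypted"]:
                print(f"Volume {volume['VolumeId']} is not encrypted at rest")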

Bot of the Week: STALE SNAPSHOT AUDIT

What it does: Identify snapshots which are older than a defined threshold of days

This bot looks for snapshots of compute volumes such as AWS’s Elastic Block Storage service which are older than a defined set of days such as 30. The value can be changed to accommodate the retention policy best suited for each organization or use case.

Why do I care?

Snapshots are backups of your hard drive, or of a network-attached volume. Most organizations capture snapshots daily to ensure they can recover their most recent information if that becomes necessary. But what happens to the old snapshots? After a week, a month or a year, old snapshots are not only useless, they also take up a lot of space and can cost a lot of money.

Snapshots are not free

The cost of storing snapshots is only a fraction of a penny per gig of storage. That doesn’t seem like much, but the costs add up very quickly. At the enterprise level, an organization could have hundreds of thousands of snapshots, each containing well over a gig of data. DivvyCloud customer GE, for example, has over half a million snapshots across the enterprise (granted, a very, very large enterprise). Even for SMBs, costs can reach several hundred dollars a month, and enterprises will see much greater savings.

Risk of old data

Companies have data retention policies for a reason. On the one hand, you need to be sure you have backups of your data. On the other hand, keeping data around forever is a huge liability: every data asset your company maintains is data that has to be secured and kept compliant for regulators and law enforcement. Reducing the attack surface of your data in line with your organizational data retention policy is the best way to keep your costs aligned with budget and ensure your data liability is exactly where you want it to be.

Get rid of stale snapshots

With regular daily snapshots, it is not necessary to keep old ones beyond 90, 60 or even 30 days. The Stale Snapshots Bot automatically removes stale snapshots from the cloud to free up space and reduce the cost of operating in the cloud. The Bot can be customized to delete stale snapshots as often as an organization sees fit, and customers can create multiple copies of it to target differing needs, such as longer data retention for production environments and shorter retention for test environments.
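
A stripped-down sketch of such a sweep is below; the 30-day retention and the dry-run guard are assumptions for illustration, not the Bot’s defaults.

    import boto3
    from datetime import datetime, timezone

    RETENTION_DAYS = 30     # example retention; tune per environment
    DRY_RUN = True          # flip to False to actually delete
    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    pages = ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])
    for page in pages:
        for snap in page["Snapshots"]:
            age = (datetime.now(timezone.utc) - snap["StartTime"]).days
            if age > RETENTION_DAYS:
                print(f"Stale snapshot {snap['SnapshotId']} ({age} days old)")
                if not DRY_RUN:
                    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])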

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo.  Sign up at BotFactory.io for a free test drive. 

DivvyCloud Launches BotFactory.io for Cloud Infrastructure Automation and Optimization


Hosted BotFactory.io delivers over 100 Bots to automate remediation of common cloud problems related to security, cost and compliance

Arlington, Virginia (November 16, 2016) – DivvyCloud, a leading developer of innovative technology to automate and optimize cloud infrastructure, has launched BotFactory.io to help Amazon Web Services (AWS) users close security gaps, save money and ensure compliance with best practices.

BotFactory.io is specifically designed to support small- and medium-sized organizations as well as security, devops and IT teams within larger enterprises. The solution continuously scans public cloud accounts for a wide variety of security and compliance issues, providing users with up-to-the-minute visibility into the health of their cloud infrastructure. Users can employ a virtual Bot Army to take autonomous, real-time action to remediate common problems based on best practices and user-defined policies.

Users can run BotFactory.io’s 100+ Bots to identify issues and fix problems within their AWS account at no cost. Pricing tiers are available for multiple cloud accounts, additional users and broader automated cloud footprints.

Examples of common cloud problems addressed by BotFactory.io:

  • Network Security – auto-detect and remove non-compliant security rules
  • Shadow IT – eliminate orphan resources, thereby reducing cost and attack surfaces
  • Downtime – turn off dev/test instances at night for upwards of 50% monthly savings
  • Data Protection – limit instances to approved cloud regions and enforce proper encryption of databases

DivvyCloud and its active user community continue to extend BotFactory.io by adding new filters, actions and integrations, thereby allowing users to address new risks, opportunities and increasingly complex challenges with public cloud deployments today and into the future.

“BotFactory has been delivering great value to our enterprise customers like General Electric and Discovery Communications,” said DivvyCloud CEO Brian Johnson. “We are excited to enable broader adoption with our hosted BotFactory.io solution. Any customer of public cloud platforms can get value from BotFactory.io within a matter of minutes, no matter their skill level or cloud size.”

Sign-up or learn more at BotFactory.io

DivvyCloud will be showcasing BotFactory.io at AWS re:Invent, November 28 – December 2, 2016 in Las Vegas.

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating compliance and optimization of public and private cloud infrastructure. Using DivvyCloud, customers can leverage programmatic Bots to identify and remediate security, cost and compliance problems in real time. DivvyCloud was founded by seasoned cloud technologists who understand first hand what is necessary to succeed in today’s fast-changing, multi-cloud world.

Media Contact
Meredith Bagnulo
PR for DivvyCloud
meredith@bagnulocomm.com
(303) 513-7494

Bot of the Week: CLOUDS WITHOUT API AUDITING SERVICES

What it does: Identifies accounts with API auditing services, such as AWS CloudTrail, inactive or disabled in any region.

This bot inspects all configured cloud accounts for the presence of API auditing services such as AWS CloudTrail across all regions. Having auditing enabled ensures that all cloud activity, both within the native cloud console and via the programmatic API, is captured for audit and tracking purposes.

Why do I care?

Within cloud accounts, enabling an API audit service such as CloudTrail tracks all changes within the cloud: who made those changes and where they were made. This is useful when a server breaks down, or when a resource is accidentally deleted or modified. It can also identify the IP address the modifications came from so that the user can be addressed directly.

These auditing services are often required for compliance with industry standards like HIPAA, PCI and other best practices. Without audit services enabled, it’s possible for malicious changes to be made and never detected, which can result in permanently deleted or changed files, unauthorized access to your account or the complete shutdown of servers. Services such as CloudTrail not only document changes made from the cloud console, they also capture changes made via API or third-party tools, ensuring all activity is recorded and audited.

This simple Bot from DivvyCloud ensures CloudTrail or another audit service is enabled globally and immediately re-enables the service if it is ever shut off, preventing unaudited changes to your cloud infrastructure.
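
To make that “enabled globally, re-enabled if shut off” behavior concrete, here is an illustrative multi-region sweep. It mirrors the described behavior but is not the Bot’s actual code, and it assumes credentials with CloudTrail permissions in every region.

    import boto3

    session = boto3.session.Session()
    for region in session.get_available_regions("cloudtrail"):
        ct = session.client("cloudtrail", region_name=region)
        for trail in ct.describe_trails()["trailList"]:
            status = ct.get_trail_status(Name=trail["TrailARN"])
            if not status["IsLogging"]:
                print(f"Re-enabling trail {trail['Name']} in {region}")
                ct.start_logging(Name=trail["TrailARN"])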

Malicious attackers will disable Cloudtrail

If a malicious user wants to compromise a cloud account, they will immediately turn off CloudTrail so the account owner has no way of knowing what’s going on inside the cloud. The hacker can exfiltrate data, delete resources and shut down servers, and it would be nearly impossible to go back and figure out who made the attack, where it came from or what was attacked.

From a compliance perspective, having CloudTrail enabled is a requirement when dealing with sensitive data. For organizations that store highly sensitive data, such as government, medical or financial organizations, it is a must-have to remain compliant.

Prevent attacks from unauthorized parts of the globe

API audit services are often enabled per region, so it is easy to lose track of which regions have the service enabled and which do not. This Bot will enable and activate an API audit configuration in every region for all cloud accounts.

One of the most useful capabilities of API auditing is the ability to only allow modifying activity from IP addresses in specified geographic regions. If activity is detected outside of that geographic location, it can be stopped immediately. This will help identify and mitigate external threats and potential intrusions to your cloud footprint.

Give it a try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo. Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

DivvyCloud for DevOps: A Match Made in Heaven

What is DevOps?

DevOps, short for development and operations, is the consolidation of development activities with ongoing operations: the idea that development teams and operations teams should work together closely on launching and running applications.

What is DivvyCloud?

DivvyCloud is a leader in cloud automation and policy enforcement. Organizations and enterprises with large, complex cloud environments use DivvyCloud to automatically:

  • Curate and clean up cloud environments
  • Monitor and secure cloud infrastructure
  • Control cloud costs and provide visibility into cloud assets

How does DivvyCloud help organizations that have adopted DevOps?

DevOps teams launch applications into server environments. For speed and agility, most DevOps teams adopt the cloud as their infrastructure of choice. Because of this, the organization’s security and policy needs can be enforced and automated by leveraging SDN and cloud APIs. DivvyCloud can secure the perimeter for the DevOps infrastructure. This includes things like:

  • Inspecting firewalls and network configurations for problems, and automatically fixing them
  • Verifying proper use and rotation of infrastructure security credentials and service accounts
  • Automatically starting, stopping and cleaning up the underlying infrastructure resources (compute and storage)

Webinar video: Security Bots (24:00)

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Bot of the Week: SECURITY RULE AUDIT BOT

What it Does: Identify and Close Ports Open to Unauthorized Networks

This bot inspects all configured resource access lists, such as AWS Security Groups, Azure Network Security Groups and Google Compute Engine Firewalls, for the presence of ports/protocols which are open to the world. Examples of services which are inspected are SSH (TCP 22), Redis (TCP 6379), MySQL (TCP 3306) and Microsoft RDP (TCP 3389). The configuration of this bot can be fully customized.

Why do I Care?

Launching cloud-based services means you have to open access to your infrastructure to allow people and programs to communicate with your cloud resources. Think of your cloud infrastructure as your office building. Some people have keycards that allow them access to the building. If you don’t lock the doors, anyone can walk in, with or without the key. The cloud is the same way.

Ports can be opened for a number of reasons. A classic case is a developer trying to get some work done over Starbucks’s public internet access, or a rogue cloud thrown up in another department of your organization. Sometimes the accidental or careless actions of internal users cause the greatest risks.

Trolling for Open Ports

Leaving an unwanted port such as SSH open to the world leaves the system susceptible to attack. Attackers can hack into your system, log in and do anything they want, from hosting bad code, to adding your servers to a malicious botnet for DDoS attacks, to simply accessing your sensitive data. At DivvyCloud, we’ve found it takes less than five minutes for trolls and bad actors to find an open port and start to compromise cloud systems.

Customize Your Bot’s Response to Open Ports

With BotFactory automation and security rules you can define who can come in and out and from where, and which ports should be closed or open to specific networks using black/white lists. The Security Rule Audit Bot inspects your cloud infrastructure and monitors it continuously, giving you near real-time detection of non-compliant resources straight out of the box. The bot then provides you with a report card of your cloud infrastructure, listing all non-compliant resources and open ports.

Take Automated Action to Fix the Problem

This bot is also fully customizable and can be configured to take automated action to remediate problems. Actions can be managed across various environments, allowing for environment-specific responses. Bots can send an alert to IT or delete the offending security rule in real time. For example, an unauthorized open port to your production environment can expose customer data and revenue-generating workloads, so immediate deletion of the non-compliant rule is warranted. A developer, on the other hand, might open a port to a development environment to test their code, so an email notification and a scheduled action to remove the rule in 24 hours might be most appropriate. The bot can also be configured to log security actions in other enterprise tracking and monitoring tools such as Splunk.
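
The “delete the offending security rule” action boils down to a single revoke call. In this hedged sketch the group ID and port are placeholders, not values the bot would hard-code:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
    ec2.revoke_security_group_ingress(
        GroupId="sg-0123456789abcdef0",      # hypothetical offending group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,                  # example: SSH open to the world
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )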

Give it a Try!

Over 100 out-of-the-box Bots are available on the DivvyCloud Github repo. Sign up at BotFactory.io for a free test drive.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

AWS Washington, D.C.

DivvyCloud spoke at and sponsored this event.

Dates: September 26, 2016
Location: AOL’s HQ

DivvyCloud joins Google Cloud Platform Partner Program


DivvyCloud automates cloud infrastructure to auto-correct security and optimize costs

Arlington, Virginia (September 20, 2016) – DivvyCloud, enabling public cloud adoption strategies with its leading automation technology, today announced that it has officially become a Google Cloud Platform technology partner.

“With a growing shift towards containerization among our enterprise customers, we felt it was time for us to build a closer relationship with Google, the clear leader in many aspects of container strategies. Also, we are impressed with the renewed focus we’ve seen from Google with the goal of enabling the movement of enterprise workloads to GCP,” said Peter Scott, Chief Marketing Officer at DivvyCloud.

DivvyCloud’s support for GCP and other clouds allows customers to define policies for cloud infrastructure utilization, ranging from tag strategies and data governance rules to cost optimization and security automation. DivvyCloud works with the cloud API layer of GCP to present a unified, real-time view of cloud infrastructure while enabling policy inspection and enforcement through automation.

The DivvyCloud software platform is built from the ground up to support and automate public and private cloud technologies, accelerating adoption of cloud strategies while maintaining visibility and control at scale.

Follow DivvyCloud on Twitter (@DivvyCloud) and LinkedIn, or visit the website at divvywebsite.staging.wpengine.com

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating policy compliance and optimization within cloud computing environments. DivvyCloud allows customers to design and build their own policies and automated remediations, whether designed to target cost, security or operational best practices. DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s dynamic, multi-cloud world.

Media Contact
Peter Scott
DivvyCloud, Chief Marketing Officer
peter@divvycloud.com

OpenStack East

DivvyCloud spoke at and sponsored this event.

Dates: August 23, 2016
Location: New York City

With Revenue Expected to Reach $111 Billion This Year, Is Cloud the Inevitable Future?

 Investing in the cloud market has paid off. There was a lot of buzz in the tech industry last week around quarterly earnings announcements and the cloud. Among the most talked about was Microsoft, which reported that its Azure compute usage more than doubled year-over-year.

And, while AWS is still king, Microsoft and other cloud service providers are encroaching on their territory as enterprises continue to embrace hybrid cloud environments.

Here’s a quick look at some of the tech giants’ recent growth that is directly attributed to the cloud:

  • Microsoft’s Intelligent Cloud segment revenues grew by 7% year-over-year, while its main cloud product, Azure, grew over 100% from last year. According to Microsoft, Azure revenues jumped by over $4 billion on an annual basis to $12.1 billion.
  • Amazon’s AWS reported that revenues grew by 64% in the first quarter of 2016.
  • IBM’s public, private and hybrid cloud services grew 30% during the last quarter.
  • SAP’s cloud subscriptions were up by 30% this quarter.
  • Adobe has seen massive growth in its Creative Cloud, Document Cloud, and Marketing Cloud services.

(See Here’s How the Cloud is Helping Tech Stocks Win for more details)

 This explosive growth in cloud revenue is expected to continue its rapid pace. In fact, according to a recent Gartner report, “more than $1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. This will make cloud computing one of the most disruptive forces of IT spending since the early days of the digital age. The aggregate amount of cloud shift in 2016 is estimated to reach $111 billion, increasing to $216 billion in 2020.”


Ed Anderson, research vice president at Gartner, said, “Cloud-first strategies are the foundation for staying relevant in a fast-paced world.”

And the opportunity for growth is not just for cloud service providers. There are many other companies in the cloud ecosystem – from security, management, and automation to storage and support – that all stand to benefit from increased cloud adoption.

As this growth continues, enterprises are navigating their way through the digital transformation, learning how to best leverage the cloud and take advantage of its many capabilities.

For more information on cloud growth and best practices for cloud implementation, check out these blogs:

  • So You Know The Cloud Solution Options, But How Do you Pick the Right One?
  • Digital Transformation and the Growth of Hybrid Cloud
  • You Know You Need To Be In The Cloud, So How Do You Get There?

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

HostingCon

DivvyCloud spoke at this event.

Dates: July 24-27, 2016
Location: New Orleans

Acquisitions illustrate importance of cloud security

Enterprises across virtually every industry and marketplace are waking up to the fact that the cloud is inevitable. The benefits of cloud adoption – whether it be the potential for cost savings, the increased scalability and flexibility, enabling self-service, rapid provisioning or the potential for greater mobility – are simply beginning to outweigh many of the challenges – perceived or otherwise – that were keeping companies from jumping in. 

Although the benefits are outweighing the concerns, that doesn’t mean the concerns are going away.

There were reasons why highly regulated industries and those that work with sensitive, private and top secret information – including financial services, federal government and healthcare – were reticent to embrace the cloud in the first place. That reason was a perceived lack of security. And, considering the current threat landscape and the sheer cost of a data breach, that concern is not going to be easily dismissed.

This concern about cloud security – coupled with the inevitability of the cloud – is creating a new market demand for cloud security solutions. This market demand was well explained by Steve Herrod, managing director at General Catalyst, in an article he penned for CNBC:

 Companies, irrespective of industry, are trending toward working and transacting almost exclusively online. They’re storing more data and more types of data in the cloud and often their only point of customer contact is via the internet. The exposure this has created — including the potential risk to brand and revenue that accompanies security failures — has elevated cybersecurity to being a board-level concern.

And there are even more indicators that cloud security is the hot market of the moment. The largest of which is the flurry of cloud security acquisitions that is currently making the headlines. Just in the past few months, large cloud, IT and cybersecurity companies have been snatching up smaller security firms with comprehensive cloud security offerings.

Cybersecurity giant, Symantec, recently closed a deal to acquire cybersecurity company, Blue Coat, in a move that was slated to add a host of Web and cloud security solutions to Symantec’s security portfolio. And they weren’t the only company dealing.

Earlier this month, IT superpower Cisco made its own cloud security acquisition, adding CloudLock in a deal worth an estimated $293 million. According to Motley Fool, CloudLock’s solutions enable users “…to see and control what their users are doing in cloud services like Office365, Google Drive, and Salesforce,” which has led to significant success for the company, which “…has built out a massive enterprise cloud security presence of 700 customers, some of which are Fortune 500 companies.”

And this need to secure cloud resources has the potential to drive even more market opportunity and corporate growth activity in the near future.

According to analyst firm Gartner, cloud automation could be the next market heating up in the cloud space. The firm anticipates that the cloud automation market will grow significantly, with the technology’s market penetration in large enterprises rising from under 10 percent to 25 percent by 2020.

There are multiple reasons for this anticipated growth, but one of the largest reasons is the same one driving the growth of the cloud security marketplace – a fear about losing control and not being able to secure cloud resources. Cloud automation solutions are an instrumental tool in this fight, since they have the ability to enable enterprises to battle shadow IT and effortlessly enforce compliance.

The cloud is no longer an option for enterprises – regardless of their industry or the data that they store and share. This means security is essential – especially the security of cloud resources and data stored in the cloud. This requirement is creating massive market opportunities for companies with effective cloud security offerings, and will also create demand for cloud automation solutions that will enable enterprises to keep their cloud resources in check, in compliance and in the right hands.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

CIO Summit

DivvyCloud spoke at and sponsored this event.

Dates: June 15, 2016
Location: Cincinnati

So You Know the Cloud Solution Options, But How Do You Pick the Right One?

You’ve got your CMPs, brokerage services, expense managers, and policy and governance automators. But what does it all mean? Is that what you really need? Sure, solutions providers are great at listing tons of attributes that make migrating to the cloud seem easy and manageable with their product, but if you aren’t sure exactly why or how to use them, these promises just join the rest of the noise in the market space.

In a recent InfoWorld article, Brandon Butler lists and describes the many options available for cloud management software. And while his list is comprehensive, it is helpful to examine why each of these options would be the best choice for your specific virtual infrastructure needs. Let’s take a deeper dive into the key capabilities you might need your cloud management solution to provide.

Total Visibility and Integration

In a multi- or hybrid cloud infrastructure, gaining visibility across all clouds on one platform is ideal when data and processes span different cloud providers. As Butler mentioned, CMPs, or cloud management platforms, offer various tools that allow integration and visibility between clouds. An IT department may want to see how many instances are spun up and where, organize and track resources, and detect change across the infrastructure. The ideal CMP presents all of that in a single pane of glass, displaying the entire infrastructure in one place instead of across multiple screens and platforms.

Many cloud vendors do provide their own tools and solutions, a choice for enterprises that may be working in only one cloud. But there is a downside. Vendor lock-in can occur when a company chooses to use all of a vendor’s solutions and tools: when the company is ready to move to another cloud vendor or solution, it finds the transition too hard because the vendor’s tools and capabilities are specifically designed for that vendor. Enterprises find the hassle unattractive and remain locked into that vendor’s offerings. Using independent CMPs allows for more ease of use across clouds and tools.

Policy & Automation

Once you’re able to see your entire infrastructure, you will want to take action on those observations. For example, if you start to see rogue resources popping up in another country, you’ll probably want to prevent future instances from spinning up by setting a policy that blocks resources from operating in that region. You may also want an additional policy that prevents certain employees or teams from spinning up certain instances. With CMPs you can set alerts that let management know when this or any other non-compliant action takes place so they can solve the problem.

Some platforms take that one step further. With a list of non-compliant tickets, an IT team can begin tackling the issues one at a time, but that creates several problems. For one, staff can grow numb to the alerts, ignoring them when they come or moving slowly to respond. Another problem is that once policies are set there can be many non-compliant resources, and the list can simply grow too large to deal with in a timely manner, further increasing costs and security risks. With policy automation, bots can be taught to detect these issues and take immediate action, including shutting down the offending instance. Automation can turn instances off and back on, saving money at night when no one is using them, and even assign permissions so only approved staff can complete certain tasks. A solution that focuses specifically on active policy enforcement makes these processes simple and effective.

Scaling

Whether scaling automatically in the cloud can truly happen is debatable, but the ability to respond quickly to sudden spikes in demand can be the difference between a wonderful and a terrible customer experience. Scaling is the cloud’s ability to quickly shift resources when a workload is overwhelmed and has reached capacity. The human response to these changes can be too slow for the demand, so an automated process is needed to increase or decrease compute power as required.

Security

Security breaches are the greatest deterrent to cloud adoption, and with reports stating that breaches will increase in the coming year, enterprises are looking for solutions that guard their companies’ data from hackers.

Additionally, security breaches can also result from non-compliance within the company. When developers find ways to go around central IT and spin up what they need for a task, they may leave the company vulnerable with unintentional gaps in security. Policy enforcement is a major key to ensuring these vulnerabilities don’t happen, and if they do, that they are shut down automatically and immediately.

Cost Control

One of the draws of cloud computing is the overall cost reduction. And while it is possible to reduce costs, it is also very easy to incur them. Cloud services brokers tend to help customers reduce costs upfront, helping them determine which cloud vendors and solutions will be best for their overall needs. The front-end cost savings are attractive, but it is the operational costs that take most tech teams by surprise and rack up the greatest expense in the long run.

Costs are hidden throughout the entire cloud. Even though a single operation may cost only fractions of a penny, if an organization is unaware that a process has been running non-stop for a year, it can see its bill increase by tens of thousands of dollars.

Organizations need tools that will provide cost transparency and display exactly where the costs are coming from. But it is also beneficial to deploy and automate policies that will shut down instances, prevent shadow IT and ultimately keep an eye on the cloud to ensure unexpected costs are not incurred.

Making the Decision

With a maturing market and so many options, it can be hard to make a decision and even more difficult to figure out which solutions actually achieve the goals you have set for your cloud migration and operations. Enterprises should approach their shopping experience with a plan and with the knowledge of what they have, what they want to understand, and where they see their company and cloud in the future.

Enterprises should ask for opportunities to test drive software and platforms to see the work in action. A platform may answer one question or display certain data but create more work and costs in the long run.

Weighing all options and understanding their complete functionality will help enterprises make wiser decisions and have a more profitable future.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Hybrid Cloud Security Threats and Data Protection

With hybrid clouds on the rise, security continues to be a top concern as there are an increasing number of attack surfaces and multiple security systems that need to work together.

In a recent IT portal article, Atiq Rehamn identified “10 Hybrid Cloud Security Threats and How to Fix Them.” Among the threats mentioned are lack of encryption, unprotected APIs and denial-of-service attacks. He cites poor network execution, security protocols and management as key factors that can leave companies vulnerable. While there are many threats out there, how you manage and prevent them is key.

As more and more companies implement hybrid and multi-cloud solutions, they need to approach security from a more holistic view. Companies have been building policies for corporate IT standards for decades, but with data now distributed across multiple clouds, they have to ensure that the same set of rules are followed for every environment. And, these need to be continually reviewed and updated as their infrastructure evolves.

The good news is that most cloud security protocols can be automated. Organizational controls can be put in place across multiple cloud environments and can alert IT managers when changes occur across their diverse cloud infrastructure. These controls can also understand key interdependencies and the potential impact radius of a threat, automatically taking action to solve issues as they occur and shut down vulnerabilities.

Another factor not addressed in Rehamn’s article is the human factor. You can implement security solutions and put policies in place, but if people don’t follow them, they open the organization up to a whole new set of vulnerabilities. This is especially true for the mobile workforce that is increasingly accessing company data remotely; workers may forget to properly log out of a system, leaving it vulnerable to outside threats.

That is why it is critical that, in addition to automating the technical controls, you also ensure that your employees are in compliance with company IT policies and that the IT department has its own set of controls in place. This can be monitored and managed automatically as well.

When it comes to security protocols, every cloud you add to the mix opens up another access point to your data. By automating processes to make compliance and policy enforcement easy, organizations can ensure that their data will be protected.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Auto-scaling: A Myth or a Possibility

In a recent article on CloudTech.com, Monica Brink tries to shatter the illusion that auto-scaling, or the automatic resizing of application resources depending on demand, is an easily achievable feat. She is right, at least for organizations whose leadership expects it to just happen. There’s no denying it: no matter how cloud solutions are advertised, there is no end-all solution to the myriad concerns in leveraging the cloud. Companies that wish to migrate to the cloud are at a disadvantage if they believe it is as simple as picking a cloud and a couple of tools and, voila, you’re in the cloud. If this is the mental approach to the cloud, then it’s no wonder users are becoming disillusioned by its delivery.

Even the best tools require some initial training and programming. And more likely than not, the organization itself has to undergo a culture change as well as a reallocation of responsibilities and duties as processes become decentralized, self-service and automated.

Brink’s argument is that in order for a cloud to automatically scale to demand, the platform would have to have the intelligence to understand various causes for volume changes and take appropriate action with no human involvement. She states that auto-scaling is a far more involved process than vendors would have you believe. Again she is right, if you follow that mentality.

Successfully auto-scaling, and even fully utilizing the cloud, is indeed very involved. The initial steps of choosing a public cloud (or a few), equipping it with the correct tools and then ensuring its proper usage make for a demanding job (which also debunks the myth that the cloud will take jobs away). Auto-scaling can happen, but it requires IT staff to “teach” systems when it is time to scale up, shut down or perform any other necessary function. This can be difficult and quite complex in an enterprise (large or small) with diverse requirements across different teams, or it can be relatively easy, depending on how well equipped the cloud is with tools that can react to change intelligently.

With an active policy platform, an IT department can create a strategy defining which processes demand auto-scaling and implement those policies; from that point forward, auto-scaling takes place with relative ease.
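
As one concrete flavor of “teaching” a system when to scale, the sketch below attaches a simple scaling policy to an assumed AWS Auto Scaling group; the group name and numbers are placeholders, not a recommended configuration.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier",     # hypothetical group name
        PolicyName="scale-out-on-load",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,                 # add two instances per trigger
        Cooldown=300,                        # wait 5 minutes between actions
    )
    # A CloudWatch alarm on CPU or request volume would invoke this policy,
    # closing the detect-and-react loop described above.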

Moving to the cloud doesn’t mean you have a self-driving vehicle. The cloud exists to provide agile, on-demand compute capacity with the seamless movement of data at scale, and that takes a little work on our part. But that doesn’t mean it can’t happen. In fact, it is achievable and already working for the most innovative companies. It takes time, a solid strategy and a dedicated team with a vision for what the cloud can do in the long run. It is important for organizations to understand this before undergoing a major infrastructure shift.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Are large financial services companies finally feeling secure in the cloud?

When we think about the financial services industry, terms like cutting edge and innovative don’t usually come to mind. Although these companies are always looking for ways to better serve their customers and improve the customer experience, they can also be somewhat risk averse when it comes to implementing new technologies, and for very good reason.

The financial services industry is one of the top targets for malicious actors, and it’s easy to see why. It’s the same reason why banks are popular targets for robberies: they hold and manage the wealth of a large population of people. Some of the largest banks manage trillions (yes, trillions, with a “t”) in assets, and malicious actors are often out for financial gain. It’s a simple equation: a malicious actor is looking for money, so they attack the companies and organizations that hold and manage the money.

As an industry, being a constant target for cyberattacks can both wear on you, and influence your behavior. Financial services companies know that they’re facing a difficult and constantly evolving threat landscape, so they approach new technologies that could potentially increase their exposure and open them up to new security challenges with much trepidation and caution.

And this is why utilizing the cloud for network infrastructure has been all but a pipe dream for financial services companies in the past. There were too many red flags and cautionary tales (whether tall tales or not) about the cloud being less secure than traditional, physical datacenters to take a chance. But, according to a publication that knows the financial services industry pretty well – the Wall Street Journal – that all could be about to change. In a recent article, the WSJ claims that cloud adoption could increase in the next few years across the financial services industry. Per the article:

“Use of public cloud by big global banks is “very small,” but discussions with IT executives suggest adoption could grow significantly in 2017…Some bank IT executives told researchers they could go from zero use of an infrastructure-as-a-service model today to as much as 30% within three years.”

Why the change of heart? A few converging trends stand out:

  1. The siren call of cloud benefits – The potential for savings utilizing a “pay for what you use” cloud model for network infrastructure is becoming increasingly alluring for banks and other financial services companies, who are invariably watching as enterprises in other market segments save significant dollars by hosting applications and other workloads in the cloud. And the decreased time to provision, self-service and other benefits of cloud adoption are simply becoming too enticing to ignore.
  2. Increased availability of security tools – The large cloud providers, including Amazon Web Services (AWS) and Microsoft Azure, are beginning to offer more tools to their users. Included in these new tool sets are an increased number of security tools designed to help enterprises keep their cloud assets – and the data stored in them – secure. Unfortunately, these tools tend to be vendor-proprietary, which means they simply won’t work across disparate clouds from different cloud vendors. In today’s hybrid cloud and multi-cloud world, this becomes a problem. Which brings us to our final reason…
  3. The emergence of cloud automation and cloud management – Gartner recently projected that the market for cloud automation and cloud management technologies (which Gartner calls dynamic optimization technology) will more than double in the next decade. Much of this growth is a result of large enterprises – like those in the financial services industry – looking to improve the management, transparency and security of their multi-cloud and hybrid cloud environments. These cloud automation and cloud management solutions are cloud-vendor agnostic, and can work across multiple clouds. And they help assuage cloud security fears by employing bot armies that effectively ensure that any cloud resource that is spun up meets a set of security and compliance rules. This makes it possible to allow self-service across the enterprise and across multiple cloud vendors without opening the door to shadow IT and serious security side-effects. One such rule is sketched below.
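
As an illustration of the kind of rule a bot army enforces, here’s a minimal sketch in Python with boto3 that flags running AWS instances missing a required tag. The tag policy itself is hypothetical, and a production bot would also remediate and record each finding rather than just report it.

```python
import boto3

REQUIRED_TAGS = {"Environment", "Owner"}  # hypothetical tagging policy

ec2 = boto3.client("ec2")

def find_noncompliant_instances():
    """Return IDs of running instances missing any required tag key."""
    noncompliant = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tag_keys = {t["Key"] for t in instance.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tag_keys):
                    noncompliant.append(instance["InstanceId"])
    return noncompliant

print(find_noncompliant_instances())
```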

It’s not surprising that the IT decision makers at large financial services companies – including the Deutsche Bank executive quoted in the WSJ article – are looking more longingly at the cloud. The benefits and cost savings are there, and the public cloud is more secure than people originally claimed. When you add in the emergence of security tools from the cloud vendors themselves, and the third-party, vendor-agnostic cloud management and cloud automation solutions gaining traction in the market, the door is being kicked open for wider, more secure cloud usage across the entire financial services industry.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

How GE Uses Bots to manage a multi-vendor cloud

In our previous article on Cloud Sprawl, we featured the first part of a Q&A with GE Executive Thomas Martin, the Application Transformation leader for the corporation and part of GE Digital.

During the first part of our discussion, we covered the company’s current transition to the cloud, the benefits that it’s delivering and some exciting case studies within the company. These examples illustrate the power of the cloud to help improve business processes and cut costs.

In the second part of our conversation, we asked Thomas about the company’s approach to hybrid cloud adoption, what challenges this creates, and the company’s use of bots to enable better transparency, management and automation across their clouds.

Here is what he had to say:

CS: You’ve said previously that GE is looking to embrace a hybrid approach to the cloud by utilizing services from multiple cloud providers. Why is this a necessary step for GE today? Does this create any challenges?

TM: There are multiple factors, but the first and largest factor is that each of the different providers brings some nuance. Each has aspects to them that are differentiators in and of themselves.

For example, for a lot of the codebase that can be run in Lambda, we are now starting to run serverless applications. We see Microsoft as having an advantage in the .NET space to do similar things as they mature out their offerings.

For our pure Infrastructure as a Service (IaaS) deployments, we want the ability to fluidly move resources, enabling the ability to work across providers. One of the things that we’re experimenting with as we ramp up in Azure is having applications with a hybrid architecture with some of the assets in AWS and some in Azure.

We’re also working with Oracle’s Cloud Platform. There we are looking to run our ERPs – Enterprise Resource Planning systems – in the cloud. Foundational systems like ERPs, when coupled with predictive analytics drive our Digital Thread.  As systems of record, these ERPs are part of the larger ecosystem of applications within the enterprise.

One of the challenges we’re going to face as we deploy an ERP into Oracle’s Cloud Platform is managing that overall ecosystem. We need to ensure that our other applications hosted in AWS and Azure all tie together and can continue to operate. We have to ensure a cohesive experience across the providers.

One of the ways we’re going to be able to do that – and manage it – is through Event Driven Automation, the GE Bot Army. That’s the only way we’re going to be able to tackle this level of cross cloud complexity. I don’t think it’s feasible without them.

CS: What are these Bots? And how do they work?

TM: We’ve evolved our concept of the Bot Army quite a bit. Originally, Bots were single purpose, simplistic scripts that automated a specific function in a specific cloud.  Today, we have bots running around the network to enforce policy.

Not only do they identify non-compliance (regarding cost, security or best practices), they also take automated action to bring our cloud resources back into compliance. We’ve employed a cloud-agnostic platform from DivvyCloud to build Bots in a unified way, which allows us to apply consistent policy automation across different cloud deployments (AWS, Azure and internal VMware, for example).

As these Bots take action, it is essential that we have the right user groups and security in place to prevent unintentional impact, and malicious threats.

CS: What are the benefits you’ve experienced from the deployment of these Bots?

TM: Compliance is a big one that speaks for itself. Another big one is the optimization of capacity by making sure servers are sized correctly and data is stored only as long as needed.

Being able to sense variations globally and proactively diagnosing issues is essential so that we can automatically shift the network or load without the end-user ever noticing.

CS: GE’s cloud initiatives seem to be paying dividends, and the bots that you’re implementing appear to be effective in helping to manage your hybrid cloud environment. What’s next for the company as it relates to the cloud?

TM: This is just the tip of a big iceberg. We are driving a better user experience and productivity is a big piece of it. As we continue to innovate in the area of Industrial Internet of Things (IoT) and build the factory of the future, there will be continuous integration and deployment of cloud applications.  We will continue working on innovative ways to optimize our infrastructure accordingly.

For additional information about GE’s cloud implementation and their use of bots to better manage and automate their cloud infrastructure, watch this video featuring Thomas Martin.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

GE cuts costs and increases productivity with the public cloud

As a large, global enterprise with multiple business units, General Electric (GE) is at the forefront of the digital transformation that’s taking place across all industries. Cloud computing is enabling enterprises like GE to ease this transition and allow the company to optimize its infrastructure at the same time.

We talked to GE Executive Thomas Martin, the Application Transformation Leader for the corporation and part of GE Digital, about the company’s current transition to the cloud. Here’s what he had to say:

Ryan Schradin (RS): Why is GE moving to the cloud? What drivers are making the cloud essential for the company now and into the future?

Thomas Martin (TM): By developing software operations in each of our core business lines we are on track to generate $15B of our revenue from software by 2020, making us one of the top ten software companies in the world.

Many of the analytical models that drive customer outcomes rely on the physics-based engineering designs and product-related data that come from within our enterprise IT systems. To optimize access to this information, we are rapidly working through decades of application portfolio bloat and complexity. At the start of this journey we had over 9,000 applications, 300-plus ERPs running the business, and countless physical datacenters.

To move forward, we needed to simultaneously simplify the application portfolio by eliminating and consolidating many legacy systems, while moving away from bespoke, stand-alone applications. The resulting integrated ecosystem enables the data to remain connected across our Digital Thread of business processes.

This transformation has also required us to rethink the focus of what differentiates us in the marketplace. We are not going to help grow our business units because of our ability to rack and provision physical infrastructure. And that’s why we’re utilizing cloud service providers for our basic compute, storage and networking infrastructure. The self-service “as-software” capacity offerings provide a whole new level of experience for our developers, so we’re partnering with cloud providers whose entire business is to enable those capabilities.

RS: How has the move to the cloud benefited GE? Are there examples of cost savings or improved efficiency as a result of cloud implementations?

TM: As of today, we’ve moved more than 2,000 workloads into the public cloud, and have eliminated over 2,500 applications as part of our transformation efforts. Conservatively we are seeing an average cost reduction of 45% over our traditional in-house hosting solutions.

We continue to decrease the number of applications we’re running in our datacenters, with a goal to move a total of 9,000 workloads to the public cloud and reduce the number of datacenters that we have to four. These remaining internal centers will only contain our most sensitive data, with everything else going into the cloud.

One of the biggest impacts has been cultural. We’ve moved more responsibility, access, and capabilities to the applications teams for control of grassroots infrastructure. We are also moving things closer to the development team making sure all the security controls are in place. This has resulted in application deployments going from days to minutes with a significant reduction in manual touch points.

RS: Are there any case studies within GE that illustrate the power of the cloud?

TM: GE’s Oil & Gas business has been a fast adopter of cloud and to date has migrated over 300 applications. One of these applications is a configurator that the sales team uses at customer sites to sell products.

We used to spend $62,000 annually to run this application in our physical datacenter and bring in about $600,000 in orders. Changes to the application took approximately 20 days to complete through traditional release cycles. After transforming the application to a modern architectural design, and implementing continuous integration/deployment for code delivery, this application now costs $6,000 to run in the cloud, and code updates are pushed in two minutes. We also improved the performance, availability and mobility of the application, helping drive additional sales.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

You Can’t Just ‘Build a Wall’ When it Comes to Multi-Cloud Security

Back in the old days (before cloud computing) most companies built firewalls around their networks to protect themselves from outside intruders and data breaches. This was the most common approach to network security and was one of the best lines of defense.

Ah, the good old days! We’ve all seen the tremendous evolution of the cloud from software-as-a-service (SaaS) to low-cost, outsourced IT Infrastructure-as-a-Service (IaaS) to what is now widely referred to as enterprise cloud computing.

Now that cloud computing has become a mainstream practice and companies are implementing hybrid and multi-cloud solutions, they have to up the ante on their security practices. This includes a departure from the traditional way they’ve approached security. They can no longer just build a wall around their network. They have to protect their infrastructure from a variety of access points.

A great example of how this can be done is the recent partnership between Australia’s incumbent telecommunications provider Telstra, and data center security company vArmour. As Telstra moves to the cloud, it is building up its security arsenal to protect its virtualized and cloud assets.

“This is really important to our strategy, really important to Telstra in terms of us adding value as a cloud provider, and adding value as a multi-cloud and hybrid cloud provider. Not just providing cloud services, but the network and the security layers are really fundamentally important to us.”

As cloud computing has evolved, so has cloud security. It is still a top priority for the IT department, especially in highly regulated industries that have been the target of some very high profile security breaches.  And, as the mobile workforce expands, more employees are accessing data from multiple locations, making the challenge even greater.

As VMware CTO Chris Wolf states in this Virtualization Review article: “….it ultimately is the job of IT to protect, secure and ensure availability of applications and content residing in datacenters we don’t own, traversing networks we don’t own, and accessed by devices we don’t own. This is the new world in which IT operates.”

So how can IT managers ensure that the highest cloud security standards are in place while still focusing on all of the other network and infrastructure priorities?

The answer lies in a new generation of cloud automation technologies and tools. These solutions can help companies secure their data by simplifying the management of hybrid clouds and automating processes that make compliance and policy enforcement easy.

They can alert IT managers when changes occur across their diverse cloud infrastructure; understand key interdependencies and potential impact radius; and automatically take action solving issues as they occur and closing vulnerabilities.
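
For example, a minimal sketch of the “closing vulnerabilities” piece might look like the following – Python with boto3, with an assumed list of sensitive ports – revoking any AWS security group rule that leaves SSH or RDP open to the entire Internet. A real tool would do this across every cloud, with logging and approvals; this is illustrative only.

```python
import boto3

ec2 = boto3.client("ec2")
SENSITIVE_PORTS = {22, 3389}  # assumption: SSH and RDP should never be world-open

def close_world_open_ports():
    """Revoke ingress rules that expose sensitive ports to 0.0.0.0/0."""
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            # Simplified: matches rules whose starting port is sensitive.
            if rule.get("FromPort") not in SENSITIVE_PORTS:
                continue
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                # Revoke only the world-open range; other sources stay intact.
                ec2.revoke_security_group_ingress(
                    GroupId=sg["GroupId"],
                    IpPermissions=[{
                        "IpProtocol": rule["IpProtocol"],
                        "FromPort": rule["FromPort"],
                        "ToPort": rule["ToPort"],
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                    }],
                )

close_world_open_ports()
```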

So, while IT managers can’t go back to the good old days when managing network security was not as complex, they can rest assured that all of their public and private clouds are being monitored 24/7 and that they will be the first to know if anything is amiss.

For more on this topic, check out these blogs:

Securing the Cloud: How Adoption Impacts the Enterprise

How to Combat Security Risks in the Cloud

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Cloud provider partner programs spell profits

Partners are often the key to a cloud provider’s success. There are many reasons for this, but the most significant is the fact that cloud providers may be selling infrastructure as a service, but they’re not SERVICE PROVIDERS in the true sense of the word.

Cloud vendors don’t come to an office and help enterprises implement their cloud initiatives. They don’t help enterprises draft cloud strategies and migration plans. And they only offer a select set of tools and applications with which to manage, automate and analyze a company’s cloud resources.

Enterprises need help in getting their cloud initiatives off the ground. They need service providers to help them execute their cloud strategies. And they need technology companies coding and creating new software solutions that increase the transparency, automation, security and manageability of their cloud platforms.

DivvyCloud, the creator and provider of advanced cloud automation and cloud management solutions, counts itself among the technology partners of multiple cloud providers. According to Jeremy Snyder, the VP Business Development at DivvyCloud:

“Cloud technologies are transformational, and IT teams have to wrap their heads around a new way to deliver cloud benefits to their organizations. It takes multiple vendors and solutions to make the move effective. Tech providers build the solutions that make cloud adoption work for compliance, regulated environments and industries with high security requirements. Some technology providers are instrumental in enabling customers to embrace the cloud by delivering the advanced capabilities that make it possible.”

And the cloud providers realize this. At least they’re starting to.

We’re beginning to see more cloud providers implementing comprehensive partner programs. These programs reward service providers and technology vendors that bring them customers and drive their revenue. They effectively deliver incentives to the companies that deliver them new accounts and facilitate increased cloud usage at existing ones. And for technology companies and service providers, these incentives can deliver increased revenues and new customers in themselves.

One of the most well-known partner programs is offered by the largest and most well-known cloud provider – Amazon Web Services (AWS). The AWS partner ecosystem, or APN (Amazon Partner Network), is gargantuan, which makes sense since the company currently boasts the largest market share of all cloud providers.

And the program is relatively straight-forward. All partners are divided into two camps – service providers and technology partners. There are multiple levels – or tiers – within each category, each of which requires meeting a certain set of milestones, and comes with varying levels of incentives and benefits.

“The AWS partner ecosystem is massive, but within the technology provider category it’s relatively streamlined,” said Jeremy. “There are three tiers for technology providers – registered, standard and advanced – that correspond with the level of integration that your solution has with AWS, and the amount of AWS revenue it influences. DivvyCloud was recently named an ‘Advanced Technology Partner,’ which represents the highest tier.”

Although the milestones needed to qualify for the upper tiers may be difficult to attain, there are significant benefits that await the technology partners that reach them. Included in the benefits AWS provides for top-tier partners are listings in AWS partner directories, access to marketing funds and opportunities, invitations to AWS Summits and events, and exposure to a huge list of AWS customers. Those benefits are combined with the validation that comes with being a top partner of the industry’s largest cloud provider.

But AWS is just one of many different cloud providers that have strong partner programs. The potential to drive additional revenue and deliver additional capabilities to end users has many cloud providers looking to ramp up, expand or implement programs to recognize and reward their service provider and technology partners. Unfortunately, they’re not all as successful as the AWS partner program.

One great example is Microsoft’s Azure partner program.

The overall benefits of the Azure partner program are similar to those offered by the AWS program, including co-marketing funds, events and other incentives. Unfortunately, the company has failed to make the user experience as simple and streamlined as their largest competitor in the cloud space.

“Azure is an incredible cloud and Microsoft is working tirelessly to grow its user base,” said Jeremy. “Unfortunately, the company added Azure’s partner program into the existing Microsoft Partner Network, which has historically been difficult for partners to navigate and prioritizes the use of multiple Microsoft products at the same level as driving cloud adoption.”

Incorporating Azure’s partner community into the existing Microsoft Partner Network may have been the simpler option – cheaper and easier than building out a new community – but the complexity and focus on selling other Microsoft products hurts the experience and opportunity available to technology partners. Many of the innovative, small start-ups that are creating the most exciting technologies for the cloud simply don’t care about whether their clients are implementing SharePoint; they only care about the cloud and Azure.

Then there’s Google. We’ve recently discussed Google’s movement toward building out their enterprise solutions. Recent hires and comments from Google’s leadership indicate that the company is interested in and focused on competing with AWS for the top spot in the enterprise cloud space.

Unfortunately, the partner program doesn’t yet reflect this new focus on the enterprise cloud market. Despite being easy to navigate and apply for, the partner program ultimately feels like an afterthought for Google, with many technology partners feeling like less than a priority.

“The Google partner program application was very easy to apply for – the user experience was simple and straightforward, like we’ve come to expect from Google,” says Jeremy. “But it’s unclear what’s supposed to happen next…”

But the benefits of partner programs aren’t just enticing the large cloud providers. The necessity to have a well-established and run partner program is even driving smaller cloud providers to explore the creation and implementation of their own partner programs.

When asked for an example of a smaller cloud provider that is doing it right, Jeremy referenced DigitalOcean, which he says “is much more hands-on and interactive with their technology and service provider partners.”

Regardless of the size of the cloud provider, it’s clear that partners are rapidly becoming the lynchpin for continued growth and success. Although AWS and its partner program currently hold the crown, other cloud providers are rushing to get into the game, and to start utilizing their partner ecosystems to drive revenue and win market share.

Although DigitalOcean may still be figuring out an approach for its partner program and identifying which incentives and benefits it can offer, it’s working hard to lay the groundwork for an exciting program. The company is conducting due diligence with service providers and prospective technology partners to ensure that they’re a good fit, and appears to be aiming for quality over quantity with its partners.

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

DivvyCloud Becomes Amazon Web Services Advanced Technology Partner


Scaling enterprise cloud adoption through compliance automation and infrastructure optimization leads to partnership recognition

Arlington, Virginia (June 15, 2016) DivvyCloud, a leading developer of innovative technology to automate and manage today’s cloud infrastructure, announced that it has achieved the highest level of technology partner status with Amazon Web Services (AWS), the leading platform for public cloud infrastructure-as-a-service. The achievement comes after two years of full AWS support within the DivvyCloud product.

“We’ve been developing for the AWS platform since the company’s inception, and every one of our customers from General Electric to small start-ups use AWS in some capacity,” said DivvyCloud CEO Brian Johnson. “As we see customers adopt AWS more broadly, they continue to face challenges managing across multiple accounts, and especially implementing mechanisms to automate policy compliance and optimization.”

DivvyCloud’s platform integration to AWS, and other leading cloud technologies, offers customers the ability to build automated compliance and corrective action rules for their cloud deployments using DivvyCloud’s Bot Factory. Existing Bots cover a range of security and cost management use cases, and include:

  • Scheduled Instances Bot – Auto-schedule instance downtime to reduce costs
  • Regional Audit Bot – Eliminate resources in unapproved cloud regions
  • S3 Bucket Permissions Bot – Ensure appropriate permissions on S3 buckets
  • Network Audit Bot – Identify and fix saturation levels of networks and subnets

Bot Factory allows customers to design and configure their own policy automation in just a few clicks, including data integration with other monitoring, orchestration and internal policy systems.
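
To give a flavor of what a schedule-based rule reduces to under the hood, here’s an illustrative sketch – Python and boto3, with a hypothetical Schedule tag and off-hours window, not DivvyCloud’s actual Bot code – that stops tagged instances outside business hours:

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")
OFF_HOURS = range(20, 24)  # assumption: stop dev boxes between 20:00 and midnight UTC

def stop_scheduled_instances():
    """Stop running instances tagged Schedule=office-hours during off hours."""
    if datetime.now(timezone.utc).hour not in OFF_HOURS:
        return
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},  # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [
        i["InstanceId"]
        for r in resp["Reservations"]
        for i in r["Instances"]
    ]
    if ids:
        ec2.stop_instances(InstanceIds=ids)

stop_scheduled_instances()
```

Run on a timer (cron, for instance), a rule like this is the essence of driving down monthly spend on non-production infrastructure.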

“It was time for us to engage more closely with the cloud platform partners,” said DivvyCloud VP of Business Development Jeremy Snyder. “Very few ISVs ever achieve the APN Advanced Technology Partner status. It’s a great achievement for us, and we will continue to work closely with AWS to bring additional value to our customers.”

Follow DivvyCloud on Twitter (@DivvyCloud), the blog www.cloudsprawl.net, or visit the website at www.divvycloud.com. To see DivvyCloud’s AWS Partner Network (APN) listing, please visit http://www.aws-partner-directory.com/PartnerDirectory/PartnerDetail?Name=DivvyCloud

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating the lifecycle management of IT resources across leading public/private cloud technologies. DivvyCloud was founded by seasoned technologists who understand firsthand what is necessary to succeed in today’s dynamic, multi-cloud world. DivvyCloud delivers multi-cloud resource visibility, provisioning, automation, monitoring and migration capabilities to improve security, reduce costs, and maintain operational agility.

Media Contact
Peter Scott
DivvyCloud, Chief Operating Officer
peter@divvycloud.com

Combating Shadow IT and Bringing Cloud Computing Out of the Shadows

Cloud technologies allow enterprises to deploy applications and virtualize compute, storage and networking capacity at scale with relative ease. This easy provisioning often leads to Shadow IT – the unknown use of IT resources in the cloud – a danger for organizations that haven’t implemented solutions to identify and reduce the impact of rogue cloud resources.

DivvyCloud, a developer of innovative software to automate and manage today’s cloud infrastructure, recently authored a new white paper entitled, “Bringing Cloud Computing Out of the Shadows: Shine the light on Shadow IT with active policy enforcement and cloud automation,” which highlights the major concerns and risks enterprises face with Shadow IT. Featuring the latest research, this report discusses data breaches and additional risks of Shadow IT such as cost, security and compliance. The paper also outlines best practices and solutions for preventing Shadow IT across public and private cloud deployments.

To learn more about shadow IT and its impact on your enterprise, click HERE.

Embracing Machine Learning – You’ve Dipped Your Toes in the Cloud, Now Dive into Data

For enterprises that were first to adopt the cloud, it is now becoming commonplace. They aren’t figuring out the cloud anymore. They’ve moved in and have already been seeing the benefits of making the shift. These organizations have implemented the cloud, partnered with the right vendors, begun storing data, running apps and they are ready for the next big step.

Enterprises realize that the massive loads of data they are storing can provide real insights into their consumer base, which can be used to better serve them. The problem is that it would take a lot of humans and a lot of time to quickly and efficiently understand all this information. Machine learning, or the use of algorithms to automate pattern recognition, allows software to interpret the data enterprises have collected to understand what the information means, and even what actions to take. With the information flowing through the cloud, organizations can predict the needs of consumers, tailor campaigns, adjust product offers, and so on. With machine learning, the cloud can be used to create more value for companies and their customers.

Take a look at these articles that discuss this topic in depth:

Machine Learning is Redefining the Enterprise in 2016 by Louis Columbus

Louis Columbus talks machine learning in various industries in a recent Forbes article. Columbus explains that with machine learning, years of data can be turned into a competitive advantage and drive revenue with marketing promotions, campaigns and rebates across channels. He also shares how different industries are using machine learning to advance their businesses.

Machine learning is being adopted by enterprises more and more with the ability to collect, evaluate and respond to a wide range of data. The algorithms that are used can self-correct and re-learn appropriate responses and actions. Columbus goes on to describe the change in data used through cloud computing, Internet of Things (IoT) and other computing abilities that are now demanding the use of machine learning to evaluate the information.  

Google says welcome to the Cloud 2.0 by Sharon Gaudin

Amazon Web Services might be the leading cloud provider, but Google plans to take over in the data analytics space. In her Computerworld article, Sharon Gaudin discusses the position of Diane Greene, senior vice president of Google’s cloud business, that the company can take data and analytics from the cloud and provide insights on what can make businesses even better. This is what Greene calls Cloud 2.0.

“It’s just a given now that you have a more cost-effective and reliable way of computing,” Greene told Computerworld during Google I/O last week. “The 2.0 of the cloud is the data and understanding the data. Now that you’re in the cloud, how do you take advantage of it so your business can operate at a whole new level?”

Machine Learning for Your Cloud Too

DivvyCloud uses machine learning in the form of Cloud Automation Bots. As an active policy enforcement platform, DivvyCloud works on-premise to monitor and self-heal the cloud infrastructure. For enterprises that are spinning up hundreds, thousands, or even hundreds of thousands of instances (and related virtual resources), ensuring compliance, security and cost savings is too big of a job for any human or team of humans to manage. These Bots automate responses to compliance issues and immediately remediate discrepancies. By teaching the Bots what violations to look for and how to resolve them, enterprises can guarantee the operation of their cloud is not leading to vulnerabilities that could result in breaches, unknown expenses or sprawling shadow IT.
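
As a simple illustration of the rule-based end of this spectrum (the learning layers sit on top of rules like this), here’s a sketch in Python with boto3 that finds EBS snapshots older than an assumed 30-day threshold – the kind of violation a Bot would be taught to detect and then resolve:

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
MAX_AGE_DAYS = 30  # assumption: snapshots older than this are "stale"

def find_stale_snapshots():
    """Return IDs of this account's snapshots older than the threshold."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_AGE_DAYS)
    stale = []
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                stale.append(snap["SnapshotId"])
    return stale

print(find_stale_snapshots())
```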

Together, machine learning and a sound cloud infrastructure can help businesses take a more targeted and accurate approach to their operations. The question is no longer “should we go to the cloud?” The new question is “what can we learn from the data in our cloud(s) about our business and our customers?”

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Cloud strengths/weaknesses driving hybrid cloud adoption

When organizations were first analyzing cloud providers and making a jump into the cloud, it was all about narrowing down the field and choosing that single cloud provider that best met their needs. When that vendor was identified, all of the organization’s workloads and data would then be transitioned into that cloud.

When “hybrid cloud” was discussed, it was often in reference to having a combination of public and private clouds.

But this is no longer the case, or the accepted practice in the industry. Organizations and enterprises are no longer choosing just one provider and moving forward with them exclusively. And hybrid clouds are no longer just a mix of public and private clouds, but rather a combination of private clouds and multiple public clouds from disparate vendors.

These hybrid cloud environments are rapidly becoming the norm, with enterprises identifying multiple clouds and vendors for their workloads, data and applications. And recent studies help to shed some light on why.

Investment bank Cowen & Co. recently conducted a survey of 314 public cloud customers. Their research identified which cloud providers are grabbing the largest market share (AWS was the clear leader, to the surprise of no one), and also assigned scores to the disparate cloud providers for a handful of different characteristics.

Ultimately, their research identified different strengths and weaknesses for each individual cloud vendor – and AWS had some significant weaknesses, including security and quality of support. These strengths and weaknesses are illustrated in the following tables from Cowen & Co.:

[Tables from Cowen & Co.: relative strengths and weaknesses by cloud provider]

These tables illustrate that not all clouds are made the same. In addition to each having their own strengths and weaknesses, they also have different features and capabilities. This means that one single cloud provider is no longer the best route for the enterprise. It’s better and more effective to identify the best cloud provider for the workload, and utilize multiple clouds.

However, this multiple, hybrid cloud approach creates other challenges. Multiple clouds from different cloud providers mean multiple management frameworks and increased complexity in managing cloud environments. It can create challenges when moving data, workloads and applications from one cloud to another. It can also negatively impact security, if not all cloud servers are updated, patched and held to the same security requirements and protocols.

This is why cloud management and cloud automation adoption is likely to grow in the near future – much as analyst firm Gartner predicted. As enterprises rush to embrace a hybrid cloud model, they will soon find themselves experiencing the pains of increased complexity and decreased transparency – which can include security challenges and out-of-control cloud costs.

Cloud automation and cloud management solutions can help companies eliminate these challenges by simplifying the management of hybrid clouds and automating processes that make compliance and policy enforcement easy.

For additional information on the rise of hybrid clouds and the challenges that come with a hybrid cloud environment, click here to download our eBook, “The Cloud Conundrum.”

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Let’s Make a Deal: Salesforce.com and Amazon Get Cozy in the Cloud

In a deal expected to be worth $400 million over four years, last week’s news that Salesforce.com selected Amazon Web Services (AWS) to be its preferred public cloud infrastructure provider was a big win for both companies. Salesforce.com benefits from having access to global data centers that will allow it to bring new infrastructure online more quickly and efficiently in select international markets. Meanwhile Amazon, in addition to the expected revenue influx (cha-ching!), gets the endorsement of the granddaddy of Software-as-a-Service providers.

According to this TheStreet article, Fred Moran, director of research at Burke & Quick Partners LLC, said “Amazon is on a roll when it comes to servicing the market for cloud computing. The partnership with Salesforce.com is another confirmation that AWS is dominating the cloud computing market with a broad platform of products and services.”

The enterprise market will continue to see increased adoption of the cloud due to the increasingly mobile workforce and requirements for faster speed-to-market.  According to a report from IDC,  worldwide spending on public cloud services will grow at a 19.4% compound annual growth rate (CAGR) — almost six times the rate of overall IT spending growth – from nearly $70 billion in 2015 to more than $141 billion in 2019.

Story Behind the Story:

But let’s peel back the onion a bit and take a closer look at this Salesforce.com/Amazon deal and what it means for the future of cloud computing. While the public cloud is experiencing explosive growth, private cloud is not dead.  Salesforce.com is not abandoning its huge investment in existing data centers.  What this recent move demonstrates is the continued trend towards a hybrid cloud infrastructure consisting of both public and private clouds.

While Salesforce.com will certainly continue investing in its own data centers to support its services, the company said it will turn to AWS to get services online more quickly and efficiently in select international markets.

Despite Amazon’s huge success and continued upward path towards cloud dominance, some companies are pulling out of Amazon and bringing their infrastructure back in-house. Earlier this year, we wrote about Dropbox – the online file hosting service – announcing it was moving much of its storage away from AWS and into its own private cloud infrastructure. Dropbox now stores 90% of its users’ data in-house.

Dropbox still uses AWS to keep files in specific geographic regions where they don’t have their own infrastructure. This is particularly important in countries where government rules and regulations around “data sovereignty” require customer sensitive data to be kept within their own country (or European Union) borders.

It appears that Salesforce.com is taking a page from Dropbox’s playbook on hybrid cloud deployment. There is no doubt that AWS offers the greatest geographical reach when it comes to cloud computing, which makes global expansion easier for companies from both a cost and a speed-to-market perspective.

Salesforce.com was a pioneer of the SaaS model, having built its own infrastructure to deliver its PaaS for business processes, its CRM applications and a host of other offerings. If there is anyone who should be comfortable building their own data centers, it is Salesforce.com.

However, Salesforce.com is investing in public cloud providers like AWS to test new markets before it invests heavily in its own global infrastructure. The question is: does this represent a potential move by Salesforce.com to shift more of its portfolio to AWS and reduce its data center footprint? Or is this deal all about speed and reduced risk to explore international expansion opportunities with minimal upfront investment in infrastructure?

The real proof will be five years from now once Salesforce.com has reached critical mass and needs to scale up in these regions. At that point, it may be more cost-effective for them to develop their own data centers in those regions.  Or will they continue to scale with AWS?

For more perspective on the move to hybrid cloud and the challenges of cloud adoption and management, check out these resources:

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

Analyzing the compliance challenges created by the cloud

Compliance. We know the feelings that this may evoke – fear, anxiety or even sheer dread. At the most basic level, compliance is about following the rules. But rules come in many shapes and sizes, particularly when it comes to the idea of compliance.

For instance, companies may have external rule sets, such as:

  • PCI – primarily for financial services
  • HIPAA – primarily for healthcare data
  • FedRAMP – primarily for the US government and companies that service the public sector
  • SOX – Sarbanes-Oxley – primarily for publicly traded companies

These are often referred to as compliance regimes.

And digging deeper, let’s look at what a policy regime can contain. Here, we’ll use the PCI standards, specifically the prioritized approach for PCI DSS from March 2015. Here is the approach that’s recommended for companies to follow:

  1. Remove sensitive authentication data and limit data retention.
  2. Protect systems and networks, and be prepared to respond to a system breach.
  3. Secure payment card applications.
  4. Monitor and control access to your systems.
  5. Protect stored cardholder data.
  6. Finalize remaining compliance efforts, and ensure all controls are in place.

What we can observe from this is that PCI compliance is a combination of technical controls, data policies and operational processes. But again, technical controls can vary widely, from detailed specifications like:

  • Firewall configuration to protect cardholder data
  • Not using vendor defaults for system passwords
  • Encrypt transmission of cardholder data across public networks

To very softly defined, subjective controls like:

  • Protect stored cardholder data

An interview with the CEO of a cloud managed service provider focused on HIPAA compliance revealed that his firm’s approach to compliance was centered around “building a common set of controls, but more importantly, a standard process for each type of action, and then a full audit trail for all actions taken.”

The cloud presents unique challenges

Some of the main drivers for companies moving to the cloud include rapid deployment, decentralized IT, and elastic provisioning where infrastructure can scale both up and down.

Each of these benefits, however, creates a compliance challenge of its own:

  • Rapid deployment means decreased time to verify that the appropriate security controls are in place.
  • Decentralization means decreased visibility from staff who specialize in compliance.
  • Elastic capacity means needing to monitor and control your environments constantly for new vulnerabilities or compliance issues.

And to top it off, cloud vendors typically, and explicitly, place the burden of compliance for most of the difficult controls (network, security, data policy) on the customer through the shared responsibility model.

Amazon Web Services and the shared responsibility model. Image credit: Amazon Web Services.

That’s no reason to abandon the cloud completely. Most leading cloud vendors will go out of their way to help enable you, from providing helpful guides to compliance reports for their services.

But remember the shared responsibility model – the vendor’s side only covers the physical infrastructure and virtualization layers. The application layer and – most importantly – the actual customer data are explicitly excluded!

Overcoming cloud compliance problems with policy automation

Most companies working in regulated spaces with compliance requirements employ specific people or have specific job roles responsible for ensuring compliance. That team can lead the organization’s efforts for compliance in the cloud. Policy automation tools can help you overcome all these challenges, and more.

Controls

While each application or data type may have a slightly different set of rules, building these controls centrally, and then pushing them out as an overlay to both existing and new cloud infrastructure, can ensure that all infrastructure stays within the guidelines the organization requires.

Process

What happens when there are problems? Having the team consider each control, each risk of policy violation, and the type of response needed is great. Having that documented and shared widely among all the people who might need to follow the documented response process is even better. Having the documented response process turned into an automated script that automatically corrects the problem is the best.
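
For instance, a documented response like “if an S3 bucket is publicly readable, make it private” might be turned into a script along these lines – a minimal sketch using Python and boto3, not a full remediation pipeline:

```python
import boto3

s3 = boto3.client("s3")

def remediate_public_bucket(bucket: str) -> bool:
    """If any ACL grant opens the bucket to AllUsers, reset it to private."""
    acl = s3.get_bucket_acl(Bucket=bucket)
    is_public = any(
        grant["Grantee"].get("URI", "").endswith("/global/AllUsers")
        for grant in acl["Grants"]
    )
    if is_public:
        # The documented response, executed automatically.
        s3.put_bucket_acl(Bucket=bucket, ACL="private")
    return is_public

# Hypothetical bucket name, for illustration only.
remediate_public_bucket("example-cardholder-data")
```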

Documentation

Many organizations implement centralized logging and IT service management (ITSM) ticketing systems to gather the necessary documentation to provide to auditors, and to alert them to areas of concern. However, in the rush of a security incident or outage, this documentation often has to be created and reviewed after the fact. This is not only inefficient, but also subject to human error. Modern policy automation tools can generate the needed documentation automatically and systematically.
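
A sketch of what “automatically and systematically” can mean in practice: every automated action emits a structured audit record at the moment it happens, so nothing has to be reconstructed after the fact. The field names here are illustrative, not a standard schema.

```python
import json
from datetime import datetime, timezone

def audit_record(action: str, resource_id: str, policy: str, outcome: str) -> str:
    """Emit one structured, timestamped audit entry per automated action."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "policy": policy,
        "resource": resource_id,
        "action": action,
        "outcome": outcome,
    })

# Example: documenting the S3 remediation from the previous sketch.
print(audit_record(
    action="put_bucket_acl:private",
    resource_id="example-cardholder-data",
    policy="no-public-buckets",
    outcome="remediated",
))
```

Records like these can be shipped to the same centralized logging or ITSM systems auditors already review, with no after-the-fact write-ups.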

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

VIDEO: Puppet on automating infrastructure with no strings attached

Earlier this month, members of the Washington, D.C. and Arlington Infrastructure as Code Meetup Group came together for their fourth meetup about utilizing code to influence, define, establish and automate an organization’s IT infrastructure.

Previous meetups have covered such interesting technologies as Kubernetes and Docker. This month’s event included a presentation from Robert Maury, a Field Solutions Engineer for DoD and Civilian government at automation software company Puppet.

Puppet’s solutions are well known across the devops and IT communities for their ability to enable enterprises to automatically deliver, secure and operate their infrastructure. During his presentation, Robert illustrated the power of Puppet and provided a demonstration of the solution in action.

Here is the video from Robert’s appearance:

Get Started with an Enterprise Trial

Deploy an enterprise version of BotFactory

As Google gets serious about cloud, enterprises see more choice on the horizon

Last November, Alphabet (formerly Google) announced that they were acquiring a small cloud startup called Bebop Technologies, Inc, which had created a platform optimized for the creation of “powerful and easy-to-use enterprise applications.”

Following the acquisition, Alphabet created a new position for Bebop’s founder – VMware co-founder Diane Greene – to run the company’s enterprise cloud business, which included Google for Work, Cloud Platform and Google Apps.

In addition to acquiring Bebop and its platform, the company was bringing on an important new executive. The addition of Diane was considered a coup and turning point for Alphabet. The company – which has mostly operated in the consumer space – was bringing a veteran of the enterprise IT space, accustomed to selling to the world’s largest enterprises, into the fold.

This renewed and intensified attention on enterprise sales was highlighted again this week by remarks that Diane made at Google I/O, the company’s developer conference. According to an article in MarketWatch, Diane Greene spoke to the press following the event’s keynote speech and made some very provocative statements regarding the company and its cloud offerings.

During the press briefing, Diane was quoted as saying that the company’s cloud offerings were, “…quite enterprise ready.” She also said that her early discussions with senior IT decision makers have revealed that enterprises, “…want to have two cloud vendors.”

How Successful Can Google’s Cloud Be?

As many of our readers already know, the cloud market is currently dominated by Amazon Web Services (AWS), which continues to control the lion’s share of the market despite impressive gains and growth from other large IT giants – such as Microsoft’s Azure and IBM’s Softlayer – and smaller, more niche cloud providers – such as DigitalOcean. Diane’s statement that enterprises are hungry for a second cloud vendor shows that Alphabet is looking to aggressively compete for that enterprise business with AWS.

Can the company successfully compete with AWS? We’ll pretty much have to wait and see.

Azure, Softlayer, DigitalOcean and other cloud providers have managed to win market share in the crowded cloud space, but have yet to come close to dethroning AWS. Amazon’s cloud behemoth accounted for 31 percent of the global cloud market in 2015, according to Synergy Research Group. That number is more than three times the closest competitor – Microsoft – at just 9 percent of the market.

But Alphabet/Google is no stranger to competition, and has flourished in the face of it.

In 2008, the company released its Android mobile operating system to compete against Apple’s iOS, which runs on that company’s line of iPhone mobile devices. Despite the popularity of Apple’s nearly ubiquitous devices, numbers released this week show that Android device maker Samsung continues to dominate the market (23.2 percent), and research by Gartner shows that the Android operating system controls more than 80 percent of the market in total.

However, other Alphabet projects and offerings designed to unseat incumbent market leaders have not been so successful. We’re looking at you, Google+.

That being said, only time will tell if Alphabet will be successful in wrestling market share away from AWS. But they may not even need to. There are a large number of enterprises that have yet to move to the public cloud. In fact, when we spoke to Forrester’s Dave Bartoletti, he said, “Today, the percentage of enterprises in North America that have adopted public cloud for data center infrastructure is about 30 percent.” And other enterprises are beginning to diversify their cloud portfolios and embrace a hybrid cloud environment comprised of multiple disparate clouds.

With new companies entering the cloud and early adopters embracing multiple clouds, Google’s reinvigorated cloud offering may be able to pick up a wide ecosystem of new customers without necessarily taking any business away from the incumbent cloud providers.

What does this mean for enterprises?

Ultimately, Alphabet’s reinvigorated focus on enterprise cloud offerings is only going to benefit companies. Competition is healthy and beneficial for everyone in a marketplace, and it’s important that enterprises have a choice in cloud providers. Also, since disparate cloud offerings have been found to be good at disparate things, adding a new cloud provider to the mix could provide enterprises with an option that works best for their particular needs and workloads.

However, there are some challenges that arise from embracing multiple clouds. Companies that are utilizing hybrid cloud environments may find it difficult to manage and secure these disparate clouds since they each come with their own management tools and applications. This makes it difficult to roll changes, updates, etc. out across multiple clouds. This can also make it difficult to ensure that the same security settings are enabled across all clouds.

Then there’s the issue of moving workloads, data and applications across or between clouds. Utilizing one cloud provider has historically locked enterprises in since it’s difficult to move information and applications between them. If an enterprise should find Alphabet’s cloud offerings more to their liking than their existing cloud provider, making a change could be difficult at best.

However, third-party, vendor-agnostic cloud automation and cloud management solutions could be the answer to these challenges. These solutions identify all clouds being used across an enterprise and aggregate their management onto a single pane of glass. This makes it much easier to roll updates or changes out across multiple clouds, ensure that all security settings are consistent and even makes it easier to move workloads between cloud providers.

What’s even cooler is these solutions’ ability to automate multiple cloud operations. Should unneeded cloud servers be spun up, or a cloud server be left active long after it’s needed, these solutions can ramp them back down, saving time and money. Should an instance be spun up that doesn’t meet all company governance and policy requirements, it can be spun down automatically. This ensures that even in the most complicated hybrid cloud environments, the company is never wasting money or leaving itself vulnerable.

Alphabet seems poised to throw down the gauntlet and enter the fray against the cloud market leader, AWS. Although this could be a positive for enterprise users, challenges could await, as well. And a cloud automation and cloud management solution could be necessary to ensure that cloud resources stay affordable and safe.

FinTech’s leading the charge in financial cloud computing. Is the banking industry ready to compete?

Like most industries, the financial sector is rethinking its digital and computing strategies to keep up with the trends and demands of its customer base. And like most industries, the financial sector is concerned with the risks of operating in the cloud. But to capitalize on growing opportunities in the marketplace, the financial sector needs to keep letting go of its fears and its need for control.

FinTechs, or financial technology startups, are taking the industry by storm, providing customers with the kinds of financial products and services they are seeking. These companies are finding fast and flexible solutions to daily banking needs, ranging from peer-to-peer payment services such as Venmo to digital wallets like PayPal and mobile money accounts. Leading this charge is the millennial generation, a group of consumers that is demographically large, has significant spending power and is accustomed to using technology in their daily lives.

According to a recent report by Oracle, this millennial generation is mainly concerned with having a mobile-first experience, one that is convenient, fast and safe and not limited to physical branch interactions. Banks have known for some time that this is the direction the market is headed, and many have incorporated mobile apps and features for their customers. These market-driven needs are the reason why banks are partnering with FinTechs to enhance their product offerings. It is also why some FinTechs are trying to compete with banks outright, snagging market share with solutions better aligned with customer needs.

Roadblocks for Cloud Adoption: Risks and Regulations

Making a complete digital transformation has certainly been a challenge for banks. That’s not surprising. The regulations surrounding the banking industry are such that most firms are very cautious when it comes to the penalties associated with putting customer information at risk. The regulations set by financial governing bodies are notoriously confusing and do not take into account the new, dynamic options that exist with cloud or digital infrastructure. These regulations and penalties are deterrents and a major drag on innovation in the cloud by financial institutions.

But these regulations do serve a purpose. The protection of bank customers is a top priority, and protecting personal and financial information is absolutely necessary. CIO Dive reported that nearly one in five survey respondents had suffered a data breach in the last year. The threat of cyberattacks targeting customer data can make cloud and digital innovation seem like a greater risk than it’s worth.

Banks Need to Stop Focusing on Risk Aversion and Focus on Problem Solving

Where FinTechs are winning the battle in the market is their ability to identify and solve problems quickly and in step with consumer demands. Their specific focus on financial services allows them to meet the needs of consumers more efficiently than banking institutions that may not have digital customer engagement and online strategies as core competencies.

Some larger banking institutions are getting in the game, taking advantage of the agility and flexibility of cloud computing and digital banking. A recent article by CIO magazine cites the World Bank and Capital One as examples of this change. The World Bank now uses cloud computing to generate data that helps the US government fight poverty, ultimately providing better and more targeted services to the citizens in need.

Capital One has been at the forefront of a digital-first strategy and has been an industry leader in digital banking and cloud solutions. In order to compete with emerging FinTechs and easily accessible public apps, Capital One began experimenting with cloud computing in AWS. It also built a digital innovation lab that could create its own digital infrastructure to support the development of products and services. While there has been skepticism that banking institutions would ever move into the cloud, Capital One has been shrinking its physical data centers and shifting some of its most critical workloads to AWS.

This transition has helped Capital One respond to customer needs more efficiently, because developers can create solutions that reduce cost to the company while also meeting the high-demand needs of customers, especially during peak spending seasons like Black Friday. The ability to provision infrastructure and deploy digital products quickly and on demand is key to its productivity and speed to market.

For financial institutions, staying ahead of a dynamic customer base is key to driving success. With a large population of millennials dominating the digital market, it would be prudent for banking institutions to look at all options and leverage cloud computing to meet their compute infrastructure needs. Seeking partnerships with FinTechs, or staying competitive with them, will help banks stay relevant in an ever-changing marketplace.


The Role of Policy Violations in Cloud Data Breaches

Cloud adoption has been growing year over year and is predicted to accelerate in 2016. Yet security remains one of the main areas of concern for most CIOs. Enterprise IT departments are dealing with infrastructure that is increasingly decentralized. One of the main benefits of the cloud is the ability for companies to delegate cloud access and allow self-service provisioning of infrastructure. Yet central IT, which is typically responsible for ensuring data security, can’t assume that everyone within the organization is aware of data policy or able to follow policy properly.

Cloud is not inherently more or less secure than any other platform, but the distributed nature of cloud provisioning can make it more challenging to secure. As with all security considerations, the ultimate factors to consider are:

  1. Limiting the exposure to potential breaches (attack surface)
  2. If breached, limiting the size and scope of data accessed (blast radius)

So what’s the track record for cloud security? CRN compiled a list of the top 10 data breaches in the cloud in the first half of 2015. Let’s look at two of the most notable data leaks in more detail:

  • Two BlueCross BlueShield organizations (CareFirst and Premera), with a combined 12 million patients’ data affected
  • Army National Guard, with nearly one million soldier service records exposed

Much as airplane crashes are usually the result of a combination of factors (see Malcolm Gladwell’s work on this topic for an example), most data breaches result from the failure of multiple controls or systems. Those control systems normally include security mechanisms, such as firewalls and encryption, as well as organizational policies, such as user/group access controls and infrastructure policies that must be followed. In each of these incidents, however, at least one policy violation contributed to the breach itself, or to its scale.

Blue Cross Blue Shield

BlueCross BlueShield (BCBS) is a nationwide group of 36 independent health insurance member companies covering nearly 105 million Americans. Health care data is so sensitive that it is subject to its own regulatory regime, HIPAA, which requires that PII (personally identifiable information) be stored in carefully protected ways.

However, both Premera and CareFirst had inconsistent data encryption across customer data. This meant that, while passwords were encrypted, other PII, such as names and addresses, were not. That data became public, and BCBS was forced to provide identity theft protection for nearly 12 million people.

A policy that both required and verified encryption across all customer data might not have prevented the breach itself, but it could have decreased the blast radius, saved millions in costs to BCBS and severely limited the data exposed.
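
As a sketch of what “required and verified encryption” could look like in practice, the snippet below flags unencrypted AWS EBS volumes using boto3; a real audit would extend the same idea to databases, object storage and backups:

    import boto3

    ec2 = boto3.client("ec2")

    # Walk every EBS volume and flag those not encrypted at rest.
    unencrypted = []
    for page in ec2.get_paginator("describe_volumes").paginate():
        for volume in page["Volumes"]:
            if not volume.get("Encrypted", False):
                unencrypted.append(volume["VolumeId"])

    for volume_id in unencrypted:
        print("Policy violation: volume {} is not encrypted at rest".format(volume_id))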

Army National Guard

Most organizations, whether public or private, have their own sets of vendor relationships, negotiated with service levels, security controls and more. These contracts are typically negotiated in line with the organization’s policies and requirements. So the fact that the Army National Guard’s data breach stemmed from data being stored in an entirely unapproved data center speaks directly to the lack of policy control within the organization.

Nearly one million soldiers’ identities, along with their service records, were made available to hackers. Beyond the betrayal of these public servants’ trust, the breach potentially puts soldiers’ names, addresses and more into the hands of those who may wish them harm.

In this case, policy violation was the direct cause of the problem. Having a policy to automatically reduce or eliminate unprotected attack surfaces would have been a strong deterrent against this type of breach.

Policy: Part of a Comprehensive Solution

Organizations should take a holistic approach towards protecting their data in the cloud. This protection should combine both minimization of attack surface and reduction of blast radius. Here are a few ways to do this:

  1. Make sure you choose a cloud provider whose offerings align with your needs. For instance, if you must comply with HIPAA, make sure that getting a BAA (Business Associate Agreement) with the cloud provider will be possible.
  2. Use best practices from the provider’s toolkit. Cloud providers offer varying tools, from firewalls to encryption, and guidelines for applying those tools in effective ways to meet different needs. Do some research and validate your security design both internally and externally.
  3. Educate your users. One of the greatest values of the cloud is the ability to enable various parts of the organization to get IT in a self-service model. Teams can move faster and be more productive than ever before. However, IT cannot expect all parts of the organization to understand organizational policies or follow them on their own.
  4. Complete your security suite with an active policy enforcement tool (a simple sketch follows below). Implementing policies that ensure data security and compliance with existing standards, best practices and other organizational guidelines helps guarantee that the benefits of the cloud are not lost to vulnerabilities that could be accidentally introduced.

As the proliferation of the cloud continues, active policy enforcement can help organizations defend themselves against possible data breaches in the cloud. Policy can be used to minimize both attack surface and blast radius, bringing greater peace of mind to enterprise IT departments.
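
As one hedged illustration of the active policy enforcement described in item 4, this sketch reports security group rules that expose administrative ports to the entire Internet. The port list and the report-only behavior are assumptions made for the example:

    import boto3

    SENSITIVE_PORTS = {22, 3389}  # SSH and RDP, as an example policy

    ec2 = boto3.client("ec2")

    # Scan every security group for rules open to the whole Internet.
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for permission in group.get("IpPermissions", []):
            open_to_world = any(ip_range.get("CidrIp") == "0.0.0.0/0"
                                for ip_range in permission.get("IpRanges", []))
            if open_to_world and permission.get("FromPort") in SENSITIVE_PORTS:
                print("Violation: {} opens port {} to 0.0.0.0/0".format(
                    group["GroupId"], permission["FromPort"]))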


DivvyCON1, May 12th 

DivvyCloud is excited to announce our first customer event! DivvyCON1

Thanks for joining us for this half-day session where we will share customer experiences, DivvyCloud’s future roadmap, and insights and feedback from all our guests during the open discussions.

8:00 – Gather and breakfast
8:30 – Welcome & Introductions
9:00 – Customers Presentations – clouds strategies and how we are using DivvyCloud
11:00 – DivvyCloud Product Roadmap and Demo
12:00 – Open Discussion and Lunch
1:30 – Adjourn

The Power of Bots: Coming to a Cloud Near You

Bots are coming to the cloud. You may have heard of bots — software applications that run automated tasks over the Internet, performing human processes with lightning speed.

Bots have historically been used for malicious purposes, such as coordinated, automated attacks on networked computers that lead to a denial of service, or spam bots that send messages to thousands of people with the click of a mouse.

If you’ve ever tried to buy concert tickets online only to find them sold out within the first few minutes, chances are bots were involved in grabbing a lion’s share of them to be resold elsewhere for a higher price.

While some bots still present a threat to the Internet, they can also be used for good. Companies like Microsoft and Facebook are leading the way in this new era of bots. Earlier this year, Microsoft introduced its Bot Framework, which allows developers to create bots that they and their customers can use to chat.

Last month, Facebook launched its Messenger platform with ChatBots that allow businesses to deliver automated customer support, e-commerce guidance, content and an interactive experience. It also announced a slew of chatbot partnerships with developers who got early access to the chatbots, like 1-800-Flowers.

Cloud Bots

A recent Economist article touches on how bots are becoming the new apps and states that “many in the technology industry hope that Facebook’s event will mark the beginning of another fast-growing, multi-billion-dollar software economy.”

Cloud Bots are also growing in popularity in the cloud computing market. As more and more enterprises move their applications to the cloud, these bots are enabling them to manage their cloud infrastructure more efficiently and cost effectively.

For example, General Electric Co. (GE) has deployed an army of bots throughout its cloud infrastructure that can quickly determine if its cloud apps meet security and compliance standards.  This Wall Street Journal article provides a good explanation of how GE’s bot army works: “When GE developers deploy to its cloud systems an application containing vulnerabilities or unauthorized data, a Reaper Bot is there to halt or destroy the app before it begins running.”

Automation makes it easy for customers to configure bots for security, cost and policy compliance. DivvyCloud launched its BotFactory today, which provides a framework and interface for creating special purpose cloud automation “bots” that can autonomously ensure compliance and policy across an organization’s public and private cloud infrastructure. Customers can then surface these components in the BotFactory to share across business units, or with the broader technical community.

As more enterprises expand their cloud footprint, automated bots will take on a bigger role in ensuring compliance in the dynamic world of cloud infrastructure. And while bots continue to be used for malicious purposes as well, let’s hope the good bots continue to outweigh the bad ones.


Gartner IT Operations Strategies & Solutions Summit

GE Presenting Bot Army Strategy and DivvyCloud at Gartner Summit this week in National Harbor (watch for new GE video in coming weeks)

DivvyCloud customer and GE CIO Thomas Martin will present at Gartner’s Infrastructure and Operations Summit at DC’s National Harbor, May 10-12.  Thomas will discuss GE’s strategy for managing cloud infrastructure at massive scale, and how DivvyCloud provides the platform for development and implementation of automation Bots.  DivvyCloud will be hosting customers and industry leaders at our hospitality suite May 11th for informal discussions and networking.  Let us know if you’d like an invitation to join.

DivvyCloud Releases BotFactory and GitHub Repository for Automation Bots


DivvyCloud Releases BotFactory and GitHub Repository for Automation Bots to Enforce Cloud Policy Compliance

BotFactory makes it easy for customers to configure automation for security, cost and policy compliance across leading public and private cloud infrastructure

Arlington, Virginia (May 10, 2016) DivvyCloud, a developer of innovative software to automate and manage today’s cloud infrastructure, released BotFactory this week. Built on top of DivvyCloud’s extensible platform, BotFactory provides a framework and interface for creating special purpose cloud automation “Bots” that can autonomously ensure compliance across an organization’s cloud infrastructure. Bots are comprised of interchangeable components that can be customized to fit an organization’s specific cloud policy requirements.

In addition to a growing number of standard DivvyCloud Bots, customers and partners can leverage the DivvyCloud SDK (software development kit) to develop their own automation for specific use cases. Customers can surface these components in BotFactory to share across business units, or with the broader DivvyCloud technical community.

DivvyCloud also released an initial repository on GitHub for customers and developers to leverage and share cloud automation Bots. Existing Bots cover a range of security and cost management use cases and include the following (a simplified sketch of the first appears after the list):

  • Schedule Instances Bot – Auto-schedule instance downtime to reduce costs
  • Regional Audit Bot – Eliminate resources in unapproved cloud regions
  • S3 Bucket Permissions Bot – Ensure appropriate permissions on S3 buckets
  • Network Audit Bot – Identify and fix saturation levels of networks and subnets
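
To give a feel for the first of these, here is a simplified sketch of instance scheduling written directly against the AWS boto3 SDK. It is not DivvyCloud’s actual Bot implementation; the schedule tag and business hours are assumptions:

    import boto3
    from datetime import datetime, timezone

    WORK_START, WORK_END = 8, 18  # example business hours, in UTC

    ec2 = boto3.client("ec2")
    hour = datetime.now(timezone.utc).hour

    # Find running instances opted in to scheduling via an assumed tag.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:schedule", "Values": ["business-hours"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids and not (WORK_START <= hour < WORK_END):
        ec2.stop_instances(InstanceIds=ids)  # a mirror job would start them again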

“We are investing in our platform to accelerate the creation and adoption of cross-cloud policy automation to address growing security, cost and compliance risks faced by enterprises scaling their cloud footprint,” said Brian Johnson, DivvyCloud CEO. “With the Bot Factory and our GitHub repository in place, our community can develop hundreds of Bots to address a broad range of cloud compliance, integration and automation use cases.”

Follow DivvyCloud on Twitter (@DivvyCloud), the blog www.cloudsprawl.net, or visit the website at www.divvywebsite.staging.wpengine.com

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating the lifecycle management of IT resources across leading public/private cloud technologies. DivvyCloud was founded by seasoned technologists who understand firsthand what is necessary to succeed in today’s dynamic, multi-cloud world. DivvyCloud delivers multi-cloud resource visibility, provisioning, automation, monitoring and migration capabilities to improve security, reduce costs, and maintain operational agility.

Media Contact
Peter Scott
DivvyCloud, Chief Operating Officer
peter@divvycloud.com

The Complexities of Managing AWS at Scale

Public clouds such as Amazon Web Services (AWS) offer the ease and flexibility of increasing and eliminating resource capacity as needed. This is a vast improvement from on-premises data centers that are usually either under- or over-capacity, costing enterprises valuable opportunities and capital.

But even with the simplicity of the cloud, there are still a great number of challenges with managing AWS at scale. Capacity aside, managing workloads and resources is a daunting task that still consumes significant staff time and budget. Worst of all, not managing resources properly can lead to security gaps, exposing the enterprise’s data to the wrong employees or customers, or to attack by nefarious parties. Let’s explore these challenges.

Losing visibility: Enterprises aren’t just spinning up a few resources in the cloud. There are massive amounts of data and processes that operate within the cloud infrastructure. Enterprises are finding new ways to advance their business model by using the cloud, but it can be easy to lose track of what resources exist…and where! After a while the list of resources grows so long it is nearly impossible to keep track of them all, resulting in stale resources, excess capacity and potentially unauthorized instances in the cloud.
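
Regaining that visibility starts with a complete inventory. A minimal sketch using boto3, enumerating instances in every region of an AWS account, might look like this:

    import boto3

    # Discover every region, then count instances in each one.
    ec2_global = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2_global.describe_regions()["Regions"]]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        ids = [i["InstanceId"]
               for page in ec2.get_paginator("describe_instances").paginate()
               for r in page["Reservations"]
               for i in r["Instances"]]
        if ids:
            print("{}: {} instances".format(region, len(ids)))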

Patching: As an enterprise makes updates, changes and improvements to its software, patching is used to apply those changes across the application. The challenge with patching at scale in a cloud environment is ensuring the process is applied across all resources in all locations around the world. There can be enormous numbers of resources that may or may not be affected by a given change, and AWS does not automatically propagate changes across resources. Patching therefore has to be applied to each resource individually, running the risk that some are missed and portions of the application are left out of date.

Monitoring: AWS has some built-in features that can help enterprises monitor their infrastructure. The challenge is configuring and tuning those tools for the specific needs of the organization so that they are applied across the environment. Notifications can be scheduled to alert teams of important changes, but the sheer frequency of these messages can numb employees to them, causing important downtime alerts to be missed.
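
One way to cut that noise, sketched below with boto3, is to alarm only on conditions that persist rather than on every transient change; the instance ID and SNS topic ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Page a human only if CPU stays above 90% for 30 minutes straight.
    cloudwatch.put_metric_alarm(
        AlarmName="sustained-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=6,
        Threshold=90.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
    )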

Cost Tracking / Budget Forecasting: Bills can grow, and grow, and grow as the company expands. Gartner Research found that public cloud bills are often 2-3x higher than customers’ expectations.

Another element of cost is new offerings that come from cloud service providers. Sometimes developers or staff want to quickly test these features and not tell anyone, because the cost is low…for now. If any of those trial instances is forgotten, it can create unexpected costs that may not be identified by management. The enterprise will find itself paying for capacity it never approved or has forgotten about.

Security: AWS does have tools in its platform to keep the cloud secure and protect against data breaches. Being hacked through AWS isn’t as much of an issue as spinning up resources that have not had proper policies applied, and that responsibility rests on the enterprise. Shadow IT and cloud resources provisioned by employees finding new and innovative ways to advance the infrastructure can lead to some of the greatest risks. Without knowing the enterprise’s standard practices and stance on policy enforcement, employees can unintentionally create gaps in the cloud that leave the enterprise vulnerable.

Provisioning and Deployment: Self-service, on-demand provisioning of enterprise cloud infrastructure can quickly grow out of control, reaching thousands of resources across multiple accounts and regions. With that many new events taking place, there is a chance that some of those resources violate the policies the enterprise has in place. Resources can be spun up in the wrong geographic location, with the wrong permissions, at the wrong capacity and more. Applying policies to each individual event manually is challenging and will ultimately lead to errors.

Educating Employees: Let’s face it, human error is an unavoidable risk in any enterprise. User mistakes can be the reason for a security breach, or the reason a resource is spun up, forgotten and left costing the enterprise money. But fearing human error is counterproductive. Enterprises do not want to be in the business of babysitting their technical and business staff. Using employees’ full talent and abilities means giving qualified personnel access and allowing them to explore and work within the cloud. Nevertheless, enterprises want to limit the occurrences of employees creating resources that do not align with enterprise policies.

Event Driven Automation is Key

For each of these cases, the ability to set automated policy enforcement is necessary to realize the full benefits of the cloud. Event-driven automation gives organizations the ability to set policies that can be applied across some or all resources, triggering a notification or immediate action to solve the issue.

Making sure that these changes are timely is also a major consideration. The goal is consistency, the point at which infrastructure has policy consistently applied and deployed against existing and future resources. The question the company needs to ask is: what window of time is acceptable for eventual consistency? Perhaps spinning up a resource across the world is an event that doesn’t require immediate attention from a company, but opening a firewall to the world in a production environment is something that needs to be addressed almost instantaneously. The enterprise would then need to write the policy that specifies that timing and apply it through a cloud management solution that will solve the problem within the required window.
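
As a sketch of what “almost instantaneously” can mean in practice, the handler below is the kind of function one might trigger from CloudTrail-driven events; it revokes any newly authorized security group rule that is open to the world. The simplified event shape is an assumption:

    import boto3

    ec2 = boto3.client("ec2")

    def handle_ingress_event(group_id, ip_permissions):
        """Immediately revoke rules that open a group to 0.0.0.0/0."""
        for permission in ip_permissions:
            if any(ip_range.get("CidrIp") == "0.0.0.0/0"
                   for ip_range in permission.get("IpRanges", [])):
                ec2.revoke_security_group_ingress(GroupId=group_id,
                                                  IpPermissions=[permission])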

Choosing to migrate to the cloud comes with a number of challenges that can all be met with the appropriate vendor partnerships. AWS can be leveraged to advance enterprise goals and reduce costs, but doing so manually borders on the impossible. A third-party cloud automation platform can apply active policies so enterprises can focus on their business and not on the implementation of the cloud.

For more information check out these blogs:

Gartner Report Shows IT Catch 22 and the Need for Cloud Automation

As Cloud Security Market Grows, Automation Will Play a Key Role


Infrastructure as Code, Arlington/DC Meetup #3

DivvyCloud will be attending Infrastructure as Code MeetUp #3, featuring a hands-on demonstration of how Puppet automates cloud infrastructure, led by Robert Maury (Puppet Labs), a consulting automation engineer with a wide breadth of operational and architectural experience both on premises and in the cloud.

Next Evolution of Cloud Computing: Compliance, Storage and Security Products Dominate AWS Summit

Enterprises are increasingly using the cloud to store data, run business applications and ensure there is enough capacity to run everything smoothly. Amazon is still the 800-pound gorilla when it comes to cloud computing services with its Amazon Web Services (AWS) division.  Not only has the company gained massive adoption, it is also leading the way for future trends and growth in the cloud market.

New announcements from the company at the AWS Summit last month demonstrate that it has evolved to the point of now focusing on more niche areas and augmented services for cloud offerings.

At the AWS Summit, one primary focus was on the areas of security, storage and compliance. These were the hot topics of many of the product announcements, as well as the discussions and exhibits at the show. Here’s a summary of all of the news announced at the AWS Summit: New Services and Features Announced. Let’s take a few of these and look at how they will impact the cloud:

Security

As cloud adoption has increased, more sensitive data is being stored in the cloud where security must be a top priority. Among the companies exhibiting at the AWS Summit were several cloud security vendors with products aimed at the network layer for encryption and keeping the cloud environment secure.

Amazon announced the general availability of its Amazon Inspector which aims to “improve the security and compliance of applications deployed on Amazon EC2 by using Amazon Inspector to automatically assess applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity.”

There are other, independent cloud automation solutions that can alert CISOs when changes occur across their diverse cloud infrastructure; understand key interdependencies and potential impact radius; and automatically take action, solving issues as they occur and closing vulnerabilities. These same cloud optimization technologies can detect inefficiencies and potential waste, and take automated action to drive savings.

Compliance

Compliance is becoming a big area of focus when it comes to cloud computing, especially for companies in highly regulated industries such as financial services and health care.

Amazon Inspector streamlines the process of validating and demonstrating that security and compliance standards are being followed throughout the development process.

However, in speaking with many attendees, concerns about AWS Inspector emerged – namely that Inspector is AWS-exclusive. The truth is that many enterprises, especially those in regulated industries, are in the early phases of cloud adoption. As such, they’re hesitant to adopt complex tools that will only help in compliance for a small part of their data storage.

Also, for compliance, policy, procedure and audit are at least as important as infrastructure controls. To that point, auditing changes to infrastructure is crucial.

For those companies that are large cloud customers already, scale becomes the problem. Automated compliance features can allow IT managers to monitor workloads on a regular basis and help them meet their stringent regulatory requirements.

Storage

Data storage has traditionally been contained in private data centers that are run and controlled by the CIO and IT department. But the cloud has changed that. With the rapid growth of data, many enterprises are moving their data to the cloud to save money and provide easier access to data.

Amazon’s new 80 TB AWS Import/Export Snowball appliance offers 60% more storage capacity, with immediate availability in four additional regions and global availability by the end of 2016. Amazon also introduced two new low-cost storage options for Amazon Elastic Block Store (EBS) designed for storing more data at a lower price point.

The economics, scale, and durability of cloud storage are key advantages for moving data to the cloud.

As adoption grows and cloud services become more of an integral part of an enterprise’s overall IT infrastructure, cloud automation will be essential to effectively and efficiently managing all of these services. I expect to see more adoption of cloud automation as the market evolves and bigger players like Amazon continue to extend their cloud offerings.

Did you attend the AWS Summit? If so, please share your impressions and take-aways.

For more perspective on cloud security and compliance, check out these recent blogs:

Securing the Cloud: How Cloud Adoption Impacts the Enterprise

How to Combat Security Risks in the Cloud

Concerns About Security and Compliance Slow Financial Services Adoption of Cloud Despite the Many Benefits that Await


Gartner Report Shows IT Catch 22 and the Need for Cloud Automation

Earlier this year, the analyst firm Gartner released its Innovation Insight for “Dynamic Optimization Technology for Infrastructure Resources and Cloud Services,” which looks at the current state of enterprise IT infrastructure and the need for new technologies for automating, optimizing and managing software-defined infrastructure. The report also paints an interesting picture of the challenges that IT departments are currently facing.

The analysis by the report’s authors – Donna Scott and Milind Govekar – effectively illustrates that IT departments are – quite frankly – stuck in a Catch 22 between the inefficiencies of traditional data center environments, and cloud implementations that are becoming increasingly difficult to manage, secure and afford. Worse, the cloud may not even be as efficient as many enterprises and IT leaders believe.

Is Cloud Really More Efficient?

IT departments are working to move an increasingly large number of applications and workloads from traditional data centers to the cloud for multiple reasons. First, the cloud offers faster and less expensive provisioning and implementation. Then, there’s the issue of flexibility, mobility and scalability – the cloud has those things baked in. Finally, there is the issue of efficiency.

IT departments traditionally – and smartly – built their IT infrastructures to handle peak demand. They may have even built them to handle more than that to help reduce downtime and to future-proof their infrastructure against growing demand. But, this over-provisioning wasn’t efficient, and IT departments found themselves paying for and building infrastructures that were only partially utilized.

Although numbers vary based on what research you’re citing or who you’re asking, I’d estimate that the average utilization of data centers is somewhere between 20 and 30 percent across enterprises – which equates to significant waste and money effectively flushed down the drain.

The cloud is supposed to fix this. The elasticity of cloud offerings should effectively keep IT departments from having to over-provision their infrastructures. Instead, their cloud implementations can be ramped up and down – when necessary – to handle new workloads, provide development and test environments, and handle peaks and valleys in traffic or network utilization.

But the Gartner report paints a different picture. Gartner’s research points to a situation that may even be less efficient. According to the report, “traditional data center issues of low infrastructure asset utilization and over-provisioning of capacity resources have spilled over into the private cloud environment. As reflected by Gartner client inquiries, many have asset utilization of just 10% to 20%, suggesting significant room for improvement in cost and usage…”

And, this problem is only exacerbated on the public cloud side where the pay-as-you-go commercial model leads to rampant over-provisioning. As per the report, companies are in need of ways to, “…enable better utilization of services at lower costs (for example, by changing instance types, rightsizing instances and turning off VMs that are not being used).”

If the report’s findings are accurate, the cloud may not be the efficiency-generating machine that many believe.

But Wait… There’s More…

The problems with cloud implementations don’t necessarily end with underutilization. There are other ways in which cloud implementations can act as thieves, syphoning dollars and proprietary data out of unsuspecting enterprises. The most common is caused by the decentralization of cloud provisioning decisions.

One of the allures of the cloud is its ability to enable easy on-demand, self-service provisioning of compute resources. Simply, it’s easy for anyone within an organization to pay for and spin up a cloud server or service with little to no help or oversight from their enterprise’s IT department.

This is a positive when it comes to speed of execution and agility, and it can also allow Enterprise IT to focus less on provisioning resources and more on higher-level tasks. But there’s also a dark side. The lack of visibility can mean that the company loses track of cloud resources (commonly referred to as “Shadow IT”), loses control of its cloud costs and is otherwise in the dark when it comes to its cloud resources and spending.

The Gartner report confirms this and provides metrics that show that the problem is only going to get worse. According to the report, “As more public and private cloud services are provisioned, there is a great potential for waste in terms of resources consumed and/or dollars spent to achieve both agility objectives and SLAs. This is compounded by decentralized IT spending, which is expected to reach 50 percent by 2020…”

What does this mean? Effectively, the cloud programs that many companies began to help drive cost savings are actually costing them more. How much more? According to the report, “Gartner client inquiries show that it is not uncommon for public cloud service bills to be two to three times higher than expectations.”

And this is what makes the decision to move to the cloud a Catch 22 for IT departments. If they continue to utilize their traditional IT infrastructures and physical data centers, they’ll most likely be overpaying and provisioning systems that wind up underutilized, while also losing the agility, flexibility and scalability of the cloud. However, if they begin to migrate applications and workloads to the cloud, they may see similarly low or even lower utilization rates and higher-than-anticipated cloud costs.

But there are ways they can get everything – scalability, agility, flexibility, efficiency and cost savings.

The Case for Dynamic Optimization Technology

The Gartner report defines Dynamic Optimization Technology as, “a technology capability that uses telemetry, algorithms, service and resource analytics, and policies to drive automated actions that reduce waste, cost and risk exposure, while simultaneously improving service levels.”

Simply put, these new technologies – including today’s advanced cloud management and cloud automation solutions – enable enterprises to better manage their infrastructures – including their cloud resources – in a way that requires little oversight but greatly increases transparency and cost savings.

These new technologies aggregate all of an enterprise’s cloud resources – regardless of cloud provider – onto a single operational framework, or “pane of glass.” In this management pane, policies can be set and implemented across all clouds, changes can be made universally and workloads and data can be shifted seamlessly between cloud resources.

This is just what enterprises need to escape the Catch 22.

Migrating to the cloud and implementing a cloud automation or dynamic optimization solution can help deliver all of the benefits enterprises are looking for from a cloud implementation, while also delivering the tools they need to address concerns about underutilization and cost overruns.

These solutions give IT departments the ability to enable self-service and decentralized cloud provisioning because all provisioned cloud resources are aggregated and have policy applied consistently within one control plane. The aggregation of real-time cloud data and lifecycle controls enables easier tracking of cloud expenditures and better control over the utilization of cloud servers by enabling IT departments to move data and applications onto underutilized cloud resources and subsequently spin-down unneeded cloud servers.

But there’s another area in which these solutions can help enterprises – cybersecurity. By aggregating all clouds in one place and implementing universal security controls and system updates, every cloud server and resource can be protected from cyberattack by ensuring that vulnerabilities are immediately identified and remediated per standard policy. This is especially important in today’s constantly shifting and increasingly sophisticated threat environment.

According to the Gartner report, “Dynamic Optimization Technology enforces policies that reduce sprawl and wasted costs, increases policy compliance…reduces risks from accidental or malicious activities…[and]…enable[s] better utilization of services at lower costs (for example, by changing instance types, rightsizing instances and turning off VMs that are not being used).”

The Growth of the Dynamic Optimization Technology Market

The move to the cloud is well underway. Enterprises of all sizes are migrating their infrastructure, applications and data to the cloud. And many are implementing multiple, disparate clouds in a hybrid cloud environment – whether that be public and private clouds, or multiple public clouds from disparate vendors – as a way to increase resiliency and reap other benefits.

But as enterprises move to the cloud, they’re going to experience all of the pains that we discussed above – inefficiency, underutilization, security concerns and cost increases. This pain will drive many of them to explore cloud automation, cloud management and other Dynamic Optimization Technologies as a way to alleviate this pain and overcome these challenges.

According to Gartner, “IT leaders (including I&O, CSBs and IT finance) are realizing the need for Dynamic Optimization Technology because of the rapid expansion of cloud services, and the desire to contain costs or reduce risk.”

If these solutions are the best salve for the pains impacting enterprise cloud implementations, it is fair to expect significant growth of the Dynamic Optimization Technology market as an increasing number of enterprises move to the cloud. Unsurprisingly, that’s exactly what Gartner is predicting.

Gartner currently estimates adoption of these technologies at less than ten percent of medium and large enterprises. But those numbers are predicted to explode by 2020. According to the report, “we see penetration of public cloud optimization growing significantly, with Dynamic Optimization Technology’s penetration in large enterprises rising to 25 percent by 2020.”

That number is even more generous for private cloud environments. As the report claims, “…by 2020, 35% of large enterprises will implement Dynamic Optimization Technology for private cloud and virtualized infrastructure, up from less than 5% in 2016.”

Enterprises are faced with a Catch 22 – having to choose between traditional IT infrastructures that are inefficient, expensive and underutilized, and cloud implementations that have the potential to create significant management and spending issues across the organization. But Dynamic Optimization Technology can break them out of that Catch 22. It can make the benefits of the cloud available to enterprises and help them eliminate many of the cloud’s shortcomings, without having to sacrifice the self-service and agility that are some of the most alluring aspects of cloud implementations.

The abilities of these technologies are not going unnoticed. As more enterprises migrate to the cloud, and experience the pain, cost overruns and management issues that can come with it, Dynamic Optimization Technology will become essential in their IT departments. And that is reflected in the impressive growth Gartner is projecting for the market.

For additional information about Dynamic Optimization Technology, its benefits and the market’s potential for growth, download the Gartner report by clicking HERE.


DC Modern Enterprise Tech MeetUp

The DivvyCloud team will be attending the DC Modern Enterprise Tech MeetUp on April 28, at 5:30pm. The DC Tech MeetUp works to foster innovation in enterprise tech, build a community of learning, and promote the best and brightest within the space.

Collision Conference

The DivvyCloud team will be attending the Collision Conference in New Orleans from April 26-28. Collision is “America’s fastest growing tech conference,” created by the team behind Web Summit.

Digital Transformation and the Growth of Hybrid Cloud

Most enterprises know they need to embrace the digital age and transform their business processes accordingly; however, the path to get there isn’t always clear. For many, a hybrid cloud approach is the key to accelerating this transition by allowing customers to embrace the speed and agility of the cloud while still maintaining compliance around security, cost and performance.

There are plenty of recent studies in the industry that support this growing trend, but two in particular caught my attention – one from IDG Research and the other from IBM. These surveys demonstrate that not only is the hybrid cloud market growing, but also that businesses are realizing true benefits specific to their digital transformation initiatives.

According to the survey by IDG Research on behalf of EMC, IT managers and CIOs claim that hybrid cloud makes implementing digital business initiatives a faster, easier and less expensive process. John Gallant, senior vice president and chief content officer of IDG Research Services believes that “hybrid cloud is the great enabler of digital business.”

Some of the key findings of the survey back up this claim, including:

  • 83% said they either currently use a hybrid cloud or plan to do so in the future
  • 96% said their hybrid cloud initiatives are delivering measurable results
  • 88% called hybrid cloud an important enabler of digital business
  • 73% agreed that hybrid cloud models create a path for digital business for their organization
  • On average, surveyed executives have reduced IT costs by 24 percent with hybrid cloud

IBM’s global study of 500 hybrid cloud decision makers reveals that organizations are increasingly integrating cloud resources with traditional IT to accommodate dynamic needs and specific business priorities. In addition, 70% of them report they’ll always have a blend of traditional IT and cloud. The top four reasons cited for moving to the cloud include: 1) improving productivity; 2) improved security and risk reduction; 3) cost reduction; and 4) scalability.

Aside from these studies, recent acquisitions in the cloud computing market such as Cisco/CliQr, are also evidence of the growth of hybrid clouds. As this growth continues, so too does the need for better management and automation tools.  A lot of the big cloud providers are filling this gap by acquiring smaller providers, and while this is a great strategy for them, it can lead to vendor lock-in.

As enterprises move increasingly to a hybrid cloud approach to digital transformation, they should look for independent, third-party solutions that can automate and manage their environments independent of cloud service providers’ native tools, which only work within that particular cloud environment. This will allow them the flexibility to change cloud providers as needed without the hassle of moving all of their workloads to another system.

For more information on the move to hybrid cloud and how to avoid vendor lock-in, check out these blogs:

The Future of Hybrid Cloud Management: Q&A with Red Hat’s Joe Fitzgerald

How Your Cloud Sprawl Can Lead to Vendor Lock-in


Why the Loss of Independent Cloud Management Solutions is Putting Enterprises In Danger

Abstract: The recent spate of acquisitions of cloud management and orchestration companies by large cloud providers could be taking away from the essence of the hybrid cloud, leading to increased risk of vendor/technology lock-in.

Large cloud technology and service providers are swooping in to lock up cloud management solutions at an astonishing rate. Earlier this week, ITapp was acquired by ServiceNow. Last month, Cisco scooped up CliQr. And Red Hat acquired Ansible in 2015.

Depending on who you are and where you stand, these acquisitions can be very good, or very concerning.

On one hand, this is exciting news for these newly acquired firms and others in the cloud management space. CliQr and its investors are taking home $260 million in cash from its deal with Cisco.  It is great validation of the market demand for technologies that help enterprises accelerate and get the most out of their cloud adoption.  But, on the other hand, these deals raise concerns in the marketplace for enterprises that want to pursue hybrid or multi-cloud strategies.

Hybrid cloud – at its core – is the ability to utilize a mix of public and private cloud platforms to meet specific requirements and manage risk.  We’re increasingly seeing enterprise clients migrate workloads from legacy VMware datacenter deployments into a primary public cloud – such as AWS – while utilizing a second public cloud – such as Azure – for disaster recovery or specific workloads.  There may also be a long-term strategy to migrate their private cloud to OpenStack from VMware.  So that’s four different cloud technologies across the enterprise – heading to three in the long term.

But this only works at production scale if you have cloud agnostic monitoring, management and automation tools to help address the complexity and risk.  And, as David Linthicum states in his article for InfoWorld:

“To be effective, cloud-management platforms need to be both technology- and cloud-agnostic. That independence ends once these platforms are purchased by large enterprise vendors that focus on their own cloud offerings, in a bid to steer customers to their offerings.”

With many independent solution providers being taken off the table, organizations looking for cloud solutions may lean towards working with a cloud provider that has bundled these offerings along with their cloud service. This leads to vendor lock-in – when sticking with one cloud provider becomes the only option because all of the orchestration and automation services that underpin an enterprise’s CloudOps are tied directly to that vendor.

But the problem goes beyond large cloud service providers locking customers in their grip. There are a number of benefits to hybrid computing that are lost.

For instance, spreading an enterprise’s infrastructure and applications across different clouds can prevent downtime if one cloud provider is experiencing performance issues.  Utilizing multiple clouds can also increase business agility – or the ability to quickly respond to market changes as they occur. One of the biggest reasons why enterprises need this kind of flexibility is for security purposes. Being able to quickly shift workloads from one cloud to another could be essential for protecting valuable customer information and maintaining critical services.

Locking an enterprise into just one cloud provider effectively eliminates these benefits and leaves it vulnerable to outages and security threats.

At a time when enterprises desperately need solutions to their multi-cloud management and orchestration needs, the number of independent providers is shrinking as large cloud technology and service companies acquire solutions in order to bolster their own offerings.  As David observes, “CliQr was already part of Cisco’s recommended suite for deploying InterCloud, and owning the technology will let Cisco control a key on-ramp to its InterCloud.”

It will be up to the remaining independent cloud optimization technology providers to pick up the slack.

For additional information on the benefits of a hybrid cloud environment, and the role that cloud automation and cloud management solutions can play in enabling hybrid cloud environments in the enterprise, download our eBook, “The Cloud Conundrum,” by clicking HERE.

AWS Summit (Chicago)

The DivvyCloud team will be attending the AWS Summit in Chicago from April 18-19. This event is designed to educate new customers about the AWS platform, and offer existing customers information on architecture best practices and new services.

BREAKING NEWS – DigitalOcean Secures $130M Credit Facility to Finance Continued Expansion and Growth

Citing the need for continued expansion of datacenters in light of extremely rapid growth, DigitalOcean – a cloud platform designed for developers with ease of use in mind – announced that it has secured a $130 million credit facility.

This financing is being led by KeyBanc Capital Markets Inc., which is serving as Joint Lead Arranger and Joint Bookrunner, and KeyBank National Association, which is serving as the Administrative Agent and a lender.  Other participating banks include Barclays Bank PLC, Pacific Western Bank, East West Bank, Opus Bank, Webster Bank and HSBC Bank USA.

The company plans to utilize these credit facilities to purchase equipment in order to continue its global expansion and support increasing demand, while also enabling the development and introduction of new features and products for DigitalOcean users.

“These financing transactions contribute to our goal of building the next generation cloud for software developers,” said Ben Uretsky, CEO of DigitalOcean. “We’ll be releasing new products and features in the upcoming months that will enable larger production environments to scale on our cloud infrastructure.”

DigitalOcean’s growth has increased exponentially in recent history. The company has expanded its registered customer base to 708,000 users, who have launched over 13 million cloud servers – which the company calls “droplets.” This is an enormous increase from just two years ago, when the company touted 253,000 users that had launched approximately 1.5 million servers.

Although many would look at this announcement as an indication of DigitalOcean’s successful strategy of appealing to a specific subset of the cloud market – developers looking for an easy-to-use and deploy cloud service – other industry experts see DigitalOcean’s increasing demand as a sign of larger trends in the industry.

According to Brian Johnson, the CEO of cloud automation and policy compliance solution provider, DivvyCloud, and frequent contributor to Cloud Sprawl, “The growth of DigitalOcean not only illustrates the continued enterprise reliance on the cloud as an easily scalable and agile development platform, it also demonstrates the shift from single cloud environments to hybrid cloud environments, where disparate clouds are utilized for specific purposes within the enterprise.”

Johnson continued, “We’re seeing companies of all sizes implementing multiple private and public cloud solutions across the enterprise as a way to increase resiliency, decrease downtime and to optimize their operations by using the right cloud for the right workload, application or purpose. DigitalOcean’s growth shows that specialized cloud providers are finding space in these hybrid cloud environments because of their ability to offer something other providers don’t.”

For additional information on DigitalOcean securing a $130 million credit facility, read the press release by clicking HERE. For additional information on hybrid cloud environments and their challenges, download our eBook, “The Cloud Conundrum,” by clicking HERE.


Tim McLaughlin Joins Growing DivvyCloud Team as a Strategic Advisor


Tim McLaughlin Joins Growing DivvyCloud Team as a Strategic Advisor

Successful DC entrepreneur and digital visionary Tim McLaughlin joins DivvyCloud to help accelerate business growth and technology innovation

Arlington, Virginia (April 14, 2016) DivvyCloud, a developer of innovative technology to apply active policy enforcement to today’s cloud infrastructure, welcomed Tim McLaughlin to its strategic advisory board. DivvyCloud is growing rapidly, hiring six new employees in Q1 and doubling the size of its team in Rosslyn, VA. Tim brings broad experience and valuable perspectives from his work with multiple fast-growth technology startups.

Tim was a co-founder, CTO, and CEO of Siteworx before transitioning to Chairman of the Board in June of 2015. Tim has a degree in mechanical engineering, with educational excursions into music, language, art, and artificial intelligence. He is frequently cited in tech trade and business media, including the Wall Street Journal, Washington Business Journal, DMNews, and Mobile Commerce Daily.

Tim notes, “While the rapid growth of cloud is a wonderful thing, it does bring a whole new set of challenges which I saw first hand at Siteworx. DivvyCloud not only brings deep, real-time visibility to public and private cloud infrastructure but more importantly, it allows enterprises to set and automatically enforce important policies. I think that’s pretty exciting.”

“We couldn’t be happier to have Tim join as a Strategic Advisor,” shared Brian Johnson, DivvyCloud CEO. “His deep technical expertise, knowledge of enterprise IT customers, and experience with high growth startups represents a wealth of knowledge we plan to leverage.”

Follow DivvyCloud on Twitter (@DivvyCloud), the blog www.cloudsprawl.net, or visit the website at www.divvywebsite.staging.wpengine.com

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating the lifecycle management of IT resources across leading public/private cloud technologies.  DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s dynamic, multi-cloud world.  DivvyCloud delivers multi-cloud resource visibility, provisioning, automation, monitoring and migration capabilities to improve security, reduce costs, and maintain operational agility.

Media Contact
Peter Scott
DivvyCloud, Chief Strategy Officer
peter@divvycloud.com

Splunk, DC

DivvyCloud will be attending SplunkLive! DC on April 12th. Splunk software is used to deepen business and customer understanding, mitigate cybersecurity risk, prevent fraud, improve service performance and reduce cost.

VIDEO: Demonstration of Kubernetes in action

In our last article on Cloud Sprawl, we featured video from the most recent Washington, D.C. and Arlington Infrastructure as Code Meetup.

This Meetup, which occurred in early March, featured Ruben “Ben” Cessa Flores of the Inter-American Development Bank presenting to the group about Kubernetes, a Google-developed application that enables users to manage containerized applications in a clustered environment.

During his presentation, Ben discussed how Kubernetes can enable the management of distributed components across varied infrastructures, how he was utilizing Kubernetes in his position at the Inter-American Development Bank, and how others can use the solution.
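
For readers who want to experiment with the kind of cluster management Ben described, here is a minimal sketch using the official Kubernetes Python client. It assumes the library is installed (pip install kubernetes), that kubectl access is already configured, and that the “web” deployment and “demo” namespace are illustrative names, not anything from Ben’s demo:

# Minimal sketch: inspect and scale workloads with the official
# Kubernetes Python client. Assumes a configured ~/.kube/config;
# the "web" deployment and "demo" namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

core = client.CoreV1Api()
apps = client.AppsV1Api()

# List every pod the cluster is running, across all namespaces
for pod in core.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

# Scale a deployment, the same way "kubectl scale" would
deployment = apps.read_namespaced_deployment("web", "demo")
deployment.spec.replicas = 3
apps.replace_namespaced_deployment("web", "demo", deployment)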

In the second part of his presentation, Ben provided a live Kubernetes demonstration to those in attendance. Here is video of Ben’s demonstration:

To see the first part of Ben’s presentation about Kubernetes, click HERE.

 

How to Combat Security Risks in the Cloud

In his Wall Street Journal article, “How Companies Can Minimize Their Cloud Security Risk,” Robert Plant (@drrobertplant) addressed many hazards that can arise when enterprises move applications and data to the cloud. He states that while the shift to the cloud can improve productivity and efficiency, “executives are increasingly being faced with balancing the benefits of productivity gains with significant concerns around compliance and security.”

Robert explains that a key area of concern is employees’ unsanctioned use of cloud services and applications. Some companies realize and accept that this trend is simply part of today’s decentralized business structures and the need for agile solutions, while others are unaware of what is being used, who is spinning up cloud resources, or what the end cost is to their company.

When employees provision cloud resources to increase efficiency or improve processes on their end, they don’t necessarily realize that they’re opening the enterprise up to various security risks.  This can include data sovereignty and geographic concerns, unencrypted data, ownership claims by cloud service providers, and IP infringement.

While these issues are clearly important, challenges related to cloud computing extend far beyond high-level legal and business risks.  At its most basic level, software-defined cloud infrastructure allows developers, engineers and even business people – some of whom happily run their child’s Minecraft server in the AWS cloud – to programmatically provision cloud resources all over the world. They can open security holes to work with cloud resources from home or Starbucks. This makes data leakage from lax permissions, or “super-user” access by the wrong employees, an everyday occurrence in the cloud.
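
To make that concrete, here is a minimal, illustrative sketch of the kind of audit that catches those holes, written against AWS’s boto3 SDK. The region and the watched ports are example values, and credentials are assumed to be configured:

# Minimal sketch: flag security group rules that expose sensitive ports
# to the entire Internet (0.0.0.0/0). Region and watched ports are
# illustrative; boto3 credentials are assumed to be configured.
import boto3

WATCHED_PORTS = {22, 3389}  # SSH and RDP, for example

ec2 = boto3.client("ec2", region_name="us-east-1")
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
        if open_to_world and rule.get("FromPort") in WATCHED_PORTS:
            print("Open to the world:", group["GroupId"], "port", rule["FromPort"])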

Protecting the Hybrid Cloud

Robert outlines ways to minimize risk, stating that there are strategies to allow employees to explore new tools and resources within safer operational frameworks.  But he doesn’t go beyond improved monitoring, or establishing better procurement policies and education efforts.  While that is good, high-level advice, there are more strategies and tools in the arsenal of today’s CIO and CISO – one of which is advanced cloud automation and dynamic optimization solutions.

IT leaders need to look for three key capabilities when exploring dynamic optimization technologies to help manage today’s cloud (often hybrid-cloud) deployments.

The first is a robust monitoring solution that senses and consolidates real-time data and state changes across different cloud deployments.  It’s better to pull data directly off your cloud infrastructure using native APIs than to integrate with reporting tools provided by the cloud providers, which may limit or “interpret” data within their dashboards.
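
As a rough sketch of what pulling state directly from a native API looks like, the following walks every AWS region with boto3 and builds a raw instance inventory. Credentials are assumed to be configured; a real monitoring solution would run this continuously, and for every cloud provider, not just one:

# Rough sketch: build a raw EC2 inventory straight from the native API
# in every region, rather than reading a provider dashboard.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

inventory = []
for region in regions:
    regional = boto3.client("ec2", region_name=region)
    for reservation in regional.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            inventory.append((region, instance["InstanceId"],
                              instance["State"]["Name"]))

print(len(inventory), "instances discovered across", len(regions), "regions")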

Second, smart solutions need to understand cloud infrastructure changes in the context of dependencies and interconnections between cloud compute, networking, security and storage.  For example, high CPU utilization may mean nothing more than a need to add capacity to keep up with increasing customer usage.  However, high CPU usage combined with new users on a system, changes to security rules and spiking outbound network traffic could mean that cloud systems have been hijacked to perpetrate a DDoS attack (see Chris DeRamus’ post on this topic).  Just tracking isolated information about individual cloud resources or applications is not enough.  Smart optimization technologies need to understand how cloud infrastructure components work together and draw inferences to potential risks.
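
A toy example makes the difference plain. The signal-gathering functions below are hypothetical stand-ins for real monitoring feeds (CloudWatch metrics, auth logs, VPC flow logs); the point is that the inference comes from the combination of signals, not from any single number:

# Toy sketch: draw an inference from combined signals rather than any
# single metric. The helper functions are hypothetical stand-ins that
# return canned example values; real ones would query monitoring feeds.
def cpu_utilization(instance):                  return 97.0  # percent
def new_user_count(instance, hours=24):         return 2
def security_rules_changed(instance, hours=24): return True
def outbound_traffic_spike(instance):           return True

def assess(instance):
    high_cpu      = cpu_utilization(instance) > 90
    new_users     = new_user_count(instance) > 0
    rules_changed = security_rules_changed(instance)
    traffic_spike = outbound_traffic_spike(instance)

    if high_cpu and new_users and rules_changed and traffic_spike:
        return "ALERT: pattern consistent with a hijacked host (possible DDoS source)"
    if high_cpu and not (new_users or rules_changed or traffic_spike):
        return "Likely organic load growth; consider adding capacity"
    return "Ambiguous; queue for human review"

print(assess("i-0abc123example"))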

Lastly, active policy automation is needed to respond in real-time and self-heal cloud infrastructure into compliance when humans and systems inevitably color outside the lines. The dynamic nature of the cloud quickly outstrips human capacity to manage effectively. By automating responses to the “known-knowns,” the IT team can concentrate on troubleshooting the “unknowns,” conducting deep analysis of risks (security, cost and performance) and planning for continued scalability.
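
Here is a minimal sketch of what that self-healing looks like against AWS security groups, assuming an invented allow-list of approved networks. This illustrates the pattern only; it is not any particular product’s implementation:

# Minimal sketch of self-healing: revoke any ingress source network that
# is not on an approved allow-list, restoring the group to compliance.
# The approved CIDRs are invented; boto3 credentials are assumed.
import boto3

APPROVED_CIDRS = {"10.0.0.0/8", "203.0.113.0/24"}  # example allow-list

ec2 = boto3.client("ec2", region_name="us-east-1")
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        offending = [r for r in rule.get("IpRanges", [])
                     if r.get("CidrIp") not in APPROVED_CIDRS]
        if offending:
            # Revoke only the offending source ranges, not the whole rule
            ec2.revoke_security_group_ingress(
                GroupId=group["GroupId"],
                IpPermissions=[{**rule, "IpRanges": offending}])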

A cloud automation solution with these features allows CISOs to know when changes occur across their diverse cloud infrastructure; understand key interdependencies and potential impact radius; and automatically take action solving issues as they occur and closing vulnerabilities. Furthermore, these same cloud optimization technologies can detect inefficiencies and potential waste, and take automated action to drive savings.

Combined with the guidance from Robert, embracing a cloud automation solution with these capabilities can allow CISOs and enterprise IT departments to deliver the benefits of cloud computing while ensuring compliance with evolving security, cost and performance best practices.

For additional insights into the security challenges enterprises face when moving to the cloud, download the free webinar, “Securing the Cloud – How Cloud Adoption Impacts the Enterprise,” featuring Robert “Bob” Bigman, by clicking HERE.


Video: An introduction to Kubernetes with Ruben Cessa Flores

At the end of last year, members of the Washington, D.C. and Arlington Infrastructure as Code MeetUp Group met for their first event – a MeetUp that featured presentations by Alex Duan, the CEO of Dynamo Technologies, and Brian Johnson, the CEO and Co-founder of DivvyCloud.

During their presentations, Alex delved into Docker, why it’s interesting in DevOps environments, and whether he felt it’s ready for enterprise use. Brian used his presentation as an opportunity to demonstrate three different ways to use Python code to manage an enterprise’s cloud environments.

After this well-attended and successful MeetUp, the group came back together again in early March for another exciting presentation. This time, Ruben Cessa Flores of the Inter-American Development Bank presented to the group about Kubernetes, the Google-developed, open-source container orchestration system that enables users to manage containerized applications in a clustered environment.

During his presentation, Ruben (aka: Ben) discussed how Kubernetes can enable the management of distributed components across varied infrastructures, how he was utilizing Kubernetes in his position at the Inter-American Development Bank, and how others can use the solution.

Here is Ben’s full presentation, up to his actual demonstration, which we’ll be posting in a subsequent video on CloudSprawl:


Cloud is Moving Enterprise IT from Total CIO Control to Virtually None

Enterprise IT is changing, but not all of its leaders are following suit. When talking about cloud computing and storage, it’s important to recognize where the industry is headed. In his recent InfoWorld article, “CIOs, stop dreaming about total cloud control,” David Linthicum (@DavidLinthicum) predicts that CIOs will not be able to sustain a total-control approach to their public cloud services. Gaining complete control will take years (assuming it can be achieved at all); meanwhile, enterprise demand for cloud services will exceed IT’s ability to manage costs, capacity and cohesion.

Data storage has traditionally been contained in private data centers that are totally run and controlled by the CIO and IT department. But the cloud has changed that. Today, a shift is taking place that allows business units and project teams to access solutions directly by shopping for the applications that best suit their needs. This has led to some departments going around IT and acquiring cloud solutions without their knowledge. This could amount to thousands of different virtual resources being used, leaving gaps for security risks and unnecessary costs.

This shift toward on-demand computing is putting traditionalists in an uncomfortable position, and they are left facing difficult decisions on how to manage their cloud infrastructure. David summed this up in his article when he said:

“Rogue clouds led in the past, with departments going around IT to make deals with SaaS and IaaS cloud providers. Today, those chickens have come home to roost, and IT is trying to make sense of the hundreds of cloud services that have been tossed back at them to manage.”

CIOs and enterprise IT departments are now challenged with delivering the value of cloud without killing the benefits through heavy-handed, command-and-control governance. This will not be accomplished with a total-control culture.

David lays out two possibilities for CIOs. The first is to demand complete control over all systems the enterprise is using. But that won’t work in the age of DevOps and cloud computing, where enterprises incorporate private cloud with public and hybrid cloud applications. With the freedom to customize the cloud experience, controlling all of the independent elements will be time-consuming, costly, and leave too much room for error.

The second option is to let go of control and become a broker/facilitator, managing the relationships between various cloud technologies, accounts and resources. By implementing modern cloud management technology, CIOs can focus on choosing the right products and solutions for the enterprise’s business goals, while using automation and dynamic optimization to save time, reduce waste and ensure security.

What Can CIOs Gain If They Let Go?

The future of the computing infrastructure will see more enterprises adopting cloud capabilities from a mixture of internal and external cloud sources, also known as hybrid cloud computing. While this approach will require CIOs to release some of their control over cloud features, this method has a number of benefits that outweigh the need for complete centralization of data storage, including:

  • Cost Savings: The hybrid cloud model gives enterprises the ability to keep the systems that work and add others that help those existing systems to work together.
  • Flexibility: Enterprises are not locked into a single model; they can work across both public and private clouds depending on specific requirements.
  • Visibility: Enterprise IT will have the ability to see activity across all platforms and engage the speed and agility of cloud without abandoning their overarching responsibility for security, cost and performance.

But these benefits can only be realized if CIOs relinquish control and utilize today’s advanced cloud management and cloud automation solutions to ensure that individuals within the enterprise are following the rules when spinning up their own cloud resources.

To learn more about the role that cloud management and cloud automation can play in enabling the benefits of cloud while protecting against security vulnerabilities and cost overruns, explore the resources at DivvyCloud.com.


Dropbox drops AWS to embrace hybrid cloud

In a recent post on Cloud Sprawl, I looked at the “battle” between private cloud and public cloud, and discussed why those alleging that private cloud was down and out may be getting ahead of themselves.

Ultimately, despite the incredible revenue numbers for public cloud companies, and the high number of virtual machines being created – two metrics often cited to prove public cloud’s dominance – the trend that many IT experts, analysts and cloud pundits are predicting involves a move to a hybrid cloud approach. The factors precipitating this ultimate end-state include overall cost of cloud deployments, increasingly sophisticated requirements, and the feeling that private cloud environments can be more secure.

Now there’s some additional evidence that public cloud-only environments may not be the future. Just this week, it was announced that Dropbox – the online file hosting service – is moving much of their storage away from their chosen cloud provider – Amazon Web Services (AWS) – and into their own private cloud infrastructure.

The reason for this move is simple economics. At the size that Dropbox has grown to – which the company claims is more than 500 million users – the cost to run their service on AWS infrastructure was becoming extremely high. That recurring cloud expense was cutting into the company’s margins and the feeling is a capital investment in their own private cloud infrastructure will be more cost effective over the long-run.

Dropbox is migrating the majority of their users – and their users’ files – into their own private cloud infrastructure. However, that doesn’t mean they’re abandoning AWS and public cloud altogether. The company will continue to use the service in a very smart way – to keep files in specific geographic regions where they don’t have their own infrastructure. This is in response to customers who – due to government rules or other regulations – need to keep sensitive files within their own country.

Dropbox’s move to a hybrid cloud – with an infrastructure comprised of both private and public cloud resources – makes sense for their company. And I anticipate that it will make sense for other large companies in the near future as well.

Public cloud is a solid solution for small companies looking to build out their networks and solutions without the need to shell out big bucks for their own datacenters and private cloud infrastructures. However, as companies grow and scale, the cost of these public clouds can grow along with them. Eventually, large companies can reach a point where certain workloads simply don’t make economic sense to keep in the public cloud – where the more they use, the more they pay.
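
The economics reduce to simple break-even arithmetic. With made-up numbers, purely for illustration:

# Illustrative break-even arithmetic with invented numbers: recurring
# public cloud spend vs. a one-time private build-out plus operations.
public_cloud_per_month = 1_200_000   # hypothetical monthly public cloud bill
private_capex          = 40_000_000  # hypothetical private build-out cost
private_opex_per_month = 300_000     # hypothetical staff, power, space

months = private_capex / (public_cloud_per_month - private_opex_per_month)
print("Break-even after %.1f months" % months)  # ~44 months with these numbers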

All that said, operating a hybrid cloud model – especially at scale – comes with daunting challenges. Both public and private clouds represent software-defined, virtual infrastructure that is very dynamic and often opened up to on-demand, self-service provisioning by engineers, developers and others that need compute, storage and networking capacity. Trying to maintain visibility, control and consistent processes across disparate cloud planes is not an easy task. This can make management of a hybrid cloud environment cumbersome, and even leave an enterprise’s network infrastructure vulnerable to attack.

The answer is to adopt cloud management and cloud automation technologies that effectively bring control of all cloud resources – public and private – into a single operational framework. This single pane of glass for managing an enterprise’s cloud ecosystem not only makes it easier to move workloads from one cloud to another – when necessary – but also makes it easier to manage and secure clouds by automating compliance and ensuring changes are consistently implemented across all cloud resources.
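
One common way to build that single operational view programmatically is a provider-abstraction library such as Apache Libcloud, which puts many public and private clouds behind one driver interface. A minimal sketch, with placeholder credentials and an example AWS/OpenStack pairing:

# Minimal sketch: one inventory loop across a public and a private cloud
# using Apache Libcloud's common driver interface. All credentials and
# endpoints below are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

clouds = [
    get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1"),
    get_driver(Provider.OPENSTACK)(
        "admin", "password",
        ex_force_auth_url="https://keystone.example:5000",
        ex_force_auth_version="3.x_password"),
]

for driver in clouds:
    for node in driver.list_nodes():
        print(driver.name, node.id, node.name, node.state)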

Cloud automation can even proactively protect companies from cost overruns and security breaches by reacting in real time to changes, bringing a cloud’s security settings back into compliance, and shutting down unnecessary cloud servers and costly capacity when they are not needed.

As more enterprises move to the public cloud – Forrester’s Dave Bartoletti estimates only about 30 percent already have – they’re going to start to experience the pain from surprisingly large invoices and have to make a decision about what kind of cloud environment is truly most cost effective for them. And a large number will discover the same thing Dropbox did, that it makes economic sense to move to a hybrid cloud environment.

I anticipate that we’ll see a sharp increase in hybrid cloud adoption in the near future. And – with it – will come a sharp increase in cloud automation and cloud management solution adoption to help overcome the challenges that enterprises face managing hybrid cloud environments.

For more information on the challenges of managing hybrid cloud environments and cloud security trends, explore the resources at DivvyCloud.com.


RSA 2016: Observations from the Show Floor

As the biggest IT security show in the industry celebrated its 25th year, RSA 2016 attracted a record number of more than 40,000 attendees.  And while this is very impressive, a majority of the companies exhibiting at RSA offer the exact same thing. In essence, they are all offering some type of protection against cybersecurity threats and many of these solutions are very similar in nature.

Overall, I found the content of the sessions I attended to be relatively weak in substance, but there was no shortage of new product announcements from vendors, and there were some common themes throughout the show that are worth noting.  Here are a few of my observations from this year’s show:

Focus on Endpoint Protection

The focus of the show has shifted away from anti-virus and firewalls. Firewalls have reached their limit and hackers know how to break them. Anti-virus is like a cat-and-mouse game, with vendors continually updating software to keep up with the latest viruses being created. Now the focus at RSA – and what everyone in the IT security industry is talking about – is “endpoint protection.” What this really means is device protection, including virtual devices. Threat management has gone from physical devices to any kind of device, and the big area of concern is no longer the threat to files, but threats at the network layer instead. There are holistic algorithms that look at network traffic anomalies and kick detection software into gear. And there was a lot of focus at RSA around data visualization, so companies could see where attacks were happening on their networks.

Growing Confidence in Cloud Security

The Cloud Security Alliance (CSA) had a big presence at the show. According to this Dark Reading article, the CSA reports that “enterprises have overwhelmingly begun adopting the whole gamut of available cloud service delivery models including software-as-a-service, platform-as-a-service, security-as-a-service, identity-as-a-service and data-as-a-service. However, managing security and compliance of these disparate cloud services continues to be a challenge for enterprises that is exacerbated by the inconsistent quality of the security controls offered by cloud vendors.”

Despite these challenges, CSA found that nearly 65 percent of some 200 IT and security professionals surveyed expressed confidence that cloud security vendors offer security as good as, or better than, on-premises security controls.

Shortage of Skilled IT Security Workers Continues

In his RSA keynote speech, Christopher Young, Intel Security Group’s senior vice president and general manager, talked about the shortage of skilled workers. He pointed out that 10 years ago, security firms faced roughly 25 new threats each day. Today that number is more than 500,000. With this exponential growth in daily cybersecurity threats, there is a need for more skilled IT professionals to prevent and protect against them. According to ESG research, 46% of organizations claim to have a “problematic shortage” of cybersecurity skills. And of all cybersecurity skills, 33% of organizations say that their biggest deficiency is in cloud security skills. I heard this a lot throughout the show, and it is something the industry has been talking about for quite some time. Clearly there is an opportunity here for security vendors and the tech industry at large to focus on education and training, especially as cloud adoption in the enterprise continues.

Governance/Compliance an Afterthought

I was surprised to see very little focus on compliance and governance, with the exception of a few companies at the show that were offering privileged access management solutions. The policy/control side of cybersecurity was overshadowed by threat protection solutions. One reason for this could be that if IT budgets are limited, more companies will invest in anti-virus software first. A few years from now we could see more packaged solutions that combine anti-virus software with access management and access control.

I found that only a small percentage of the companies at RSA were talking about managing network threat protection – in the cloud or in-house – from one IT infrastructure. One way companies can reduce the risk profile for what they are deploying in the cloud, and at the same time simplify the management of it all, is via cloud automation.

Cloud automation and management technologies can provide enterprises with a way to deliver the value of cloud without sacrificing visibility, and compliance with security, cost and operational best practices. And, instead of hiring dozens of people to continually monitor and respond to their cloud infrastructure, these technologies can sense and respond in real time to known security issues and save enterprises precious time and resources.

Given the current shortage of skilled workers, this could ease the burden on companies looking to move more applications into the cloud. I think we’ll see more of this approach when it comes to cloud security in the near future.

For more on this topic, check out this Cloud Sprawl webinar with former Chief Information Security Officer of the CIA Bob Bigman, “Securing the Cloud – How Cloud Adoption Impacts the Enterprise.”


Private vs public cloud – is the fight as one-sided as it seems?

In February, TechRepublic writer Matt Asay published an article analyzing the results of a study that showed a significant increase in the adoption of private cloud. The study – which gathered data by surveying more than 1,000 IT professionals – found a 14 percent increase in private cloud adoption over the previous year, and a 13 percent increase in hybrid cloud adoption.

That sounds impressive. However, Matt claimed the numbers were somewhat misleading, since they showed private and hybrid cloud adoption in a vacuum.

By looking only at private and hybrid cloud growth, the study didn’t truly show the whole picture. According to Matt’s article, the state of private and hybrid cloud adoption isn’t as rosy as this study would have you believe. And he validated those claims by doing something that makes sense – looking at the revenue numbers.

According to Matt, “…clear away the smoke and here’s the fire: Amazon Web Services, the dominant public cloud provider, generated $2.4 billion in revenue last quarter, growing 69% year over year. That growth dramatically outpaces the private cloud vendors by a gargantuan margin.”

That’s pretty damning evidence that shows the dominance of public cloud over private cloud in enterprise IT departments today. Or is it?

I would advise caution for those that look at revenue numbers as an indicator that shows everything happening in the market. I would also advise against using them as tea leaves to see where the industry is going. Private cloud adoption is coming, and it will be a large part of hybrid cloud deployments across enterprises in the coming years. In fact, it could get off the mat and back into the fray in much earlier rounds than anyone anticipates.

First, let’s talk about a problem with public cloud services that almost every enterprise is experiencing, and – ironically – creates an issue with the metrics that Matt used to show public cloud’s dominance in the first place. Namely, virtual infrastructure reproduces faster than bunnies.

Many companies that embark on a journey to utilize public cloud services for virtualizing their infrastructure often find that the cost to do so winds up being double or even triple what they originally anticipated.

There are a number of reasons for this, but part of it is because developers and IT professionals often overestimate what they need, and ramp up more cloud resources than are truly necessary. The feeling is often that it’s better to have too much than not enough, and cloud resources are often over-procured and subsequently under-utilized.

Then, due to a lack of transparency, insight and management, public cloud resources breed other unintended costs. Resources that are no longer needed are forgotten about and left there to languish and cost money. Developers and other entities across the organization ramp up more and more clouds with little to no IT insight or knowledge, leading to a wide ecosystem of clouds that are all being billed – somehow – back to the company.
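
Even a simple script can surface those forgotten resources. As an illustration, the following sketch flags AWS instances whose average CPU has stayed under an arbitrary 2 percent for two weeks; the region is an example and boto3 credentials are assumed:

# Illustrative sketch: flag instances that look forgotten by checking
# two weeks of average CPU in CloudWatch. The 2% threshold is arbitrary.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")
cw  = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId",
                         "Value": instance["InstanceId"]}],
            StartTime=end - timedelta(days=14), EndTime=end,
            Period=86400, Statistics=["Average"])
        datapoints = stats["Datapoints"]
        if datapoints and max(p["Average"] for p in datapoints) < 2.0:
            print("Possibly idle:", instance["InstanceId"])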

This could factor into the huge revenue growth of AWS and other public cloud providers. It can also factor into another metric Matt used to show public cloud’s dominance – the number of VMs stood up in public cloud environments in contrast to those in private cloud environments.

The fact is, public cloud providers are companies that aren’t going to go out of their way to help their customers spend LESS on their solutions. They’re perfectly happy to let companies hemorrhage money on their offerings. And this waste could be inflating their growth, while also making enterprises reevaluate their cloud strategies.

That being said, public cloud will continue to be essential across enterprises and their IT departments. However, I anticipate that private cloud offerings will be adopted in conjunction with them, yielding hybrid cloud environments that deliver the strengths, benefits and capabilities of both. And there’s some pretty solid evidence that this shift is underway.

Just this month, Rackspace made the announcement that it was moving 90 of its more than 6,000 employees from its public cloud division. This may not seem substantial or indicative of anything, but the company claims they were moved to areas of “fast growth,” and those areas were – you guessed it – hybrid and private cloud.

Also, new private cloud solutions are entering the marketplace that offer capabilities and benefits that could upset the apple cart and drive more enterprises to embrace private cloud environments. One of these solutions is Microsoft’s Azure Stack, which Cloud Sprawl contributor – Chris DeRamus – discussed in an excellent post earlier this month.

Those same enterprises will also increasingly turn to cloud automation and cloud management to help tame their public cloud costs, while also helping to more effectively and efficiently manage their hybrid cloud environments. And this adoption will only increase as enterprises realize just how large of a contributor they are to those “gargantuan” revenues that public cloud companies are generating.

Private and hybrid cloud may be down, but they’re not down for the count. Enterprises are quickly waking up to the financial issues that public cloud solutions can create, and are looking to hybrid and private cloud environments to provide a more cost-controlled, secure alternative.


Discovery Communications turns to cloud and cloud automation to deliver necessary flexibility and scalability

Launched as a single channel in 1985, Discovery Communications is now a leader in global entertainment, reaching three billion cumulative subscribers in more than 220 countries and territories through its global brands – led by Discovery Channel, TLC, Investigation Discovery and Animal Planet – and providing countless hours of high-quality programming across platforms and around the world.

Whether it’s providing necessary IT services to support Discovery’s global business and workforce or making content available to growing audiences, the cloud has become an essential part of Discovery’s IT infrastructure. The decision to embrace the cloud within the organization was driven by needs for flexibility, security and scalability that only the cloud could address.

And, to manage the cloud infrastructure that it’s built, Discovery Communications recently implemented a cloud management and automation solution designed to increase transparency and simplicity in the management of virtual resources at scale across different public and private cloud technologies.

To get a better understanding of why Discovery has embraced the cloud, the challenges that arise in such large, complex implementations, and the reasons why Discovery turned to cloud automation to overcome these challenges, we recently interviewed Dave Duvall, the Senior Vice President of Infrastructure and Support Services at Discovery Communications.

During our discussion, Dave elaborated on Discovery’s hybrid cloud approach, why they moved in this direction, and the role that cloud automation and cloud management are playing in managing this hybrid cloud infrastructure.

 

Here is what Dave had to say:

Cloud Sprawl: Where is Discovery in its “cloud journey?” What factors drove Discovery to adopt cloud services?

Mr. Duvall: Technology is the backbone of all that we do at Discovery, and we are focused on a long-term, highly-flexible roadmap that supports Discovery’s global business and employees.

Today, most of Discovery’s digital assets are in the cloud, and we are actively rolling out next-generation, cloud-based environments across business systems. Ultimately, with a company of our size, diversity and complexity, cloud technology enables a necessary flexibility that legacy systems do not.

Cloud Sprawl: What unique IT challenges does a large media company like Discovery face? How does the cloud help Discovery meet these challenges?

Mr. Duvall: Global media companies like Discovery Communications face a multitude of challenges – from the vast amounts of content and data created and shared daily to information security, which everyone is focused on. Cloud solutions allow us to create efficiencies, such as right-sizing our storage needs on a real-time basis and ensuring our employees and external partners can quickly and easily access the content, information and tools they need anytime, anywhere.

Beyond the inherent scalability and flexibility, the cloud is providing a unique opportunity for a greenfield design of our infrastructure capabilities. We are more easily able to address emerging security standards and design challenges than we would with physical data centers.

Cloud Sprawl: Describe Discovery’s hybrid cloud approach. What benefits and challenges does a hybrid strategy create for the company?

Mr. Duvall: As with many enterprises, a hybrid strategy is vital to our success. The migration of thousands of workloads does not happen overnight and ensuring that we have command and control of our legacy, as well as go-forward platforms, is vital.

The challenge for any enterprise implementing a hybrid cloud approach is effectively managing the old infrastructure, while investing time, talent and attention to design the future.

For this reason, we have elected to govern our migrations through two primary buckets: 1) “Lift and shift” workloads are managed essentially using legacy tooling. While this is the least efficient operating model, it is sometimes necessary. 2) “Cloud native” workloads are designed from the ground up to support automated deployment, auto-scaling and other cloud features to enhance availability, efficiency or ideally both.

Cloud Sprawl: How does delivering cloud infrastructure to users that expect self-service capabilities change/impact the role of corporate IT?

Mr. Duvall: The self-service aspects are challenging my organization to really mature as a service provider, something I am very passionate about. It all starts with a product-centered philosophy on operations, paired with high levels of customer engagement, alignment of products and services to customer demand and a focus on continuous improvements.

Today’s corporate IT engineering teams must think like product managers, which is something many wouldn’t have envisioned years ago.

The reality is cloud technology provides a lot of choice, and we have to continually measure ourselves against outside offerings to ensure we are staying aligned to our customers – either by building or brokering solutions.

Cloud Sprawl: We understand you are leveraging a cloud automation and management platform from DivvyCloud among other toolsets for helping to manage your hybrid cloud infrastructure. Why was cloud management and cloud automation important for Discovery?

Mr. Duvall: There was a lot of discussion among our Engineering and Architecture teams about the long-standing metaphor of servers as pets vs. herds. We quickly agreed on a core operational tenet: an “automate everything” philosophy, where logging into our servers to troubleshoot or fix would be an exceptional event and an operational failure.

There are several reasons for this – from wanting full visibility into our running-state environment at all times, to continuing our mission to efficiently deliver solutions to the enterprise, to leveraging infrastructure as code to quickly redeploy and recover in the event of failure or a bad deployment.

Automation allows us to leverage autoscaling, workload grooming and other approaches to ensure our costs are managed and, most importantly, contained.

Cloud Sprawl: What were you looking for in a cloud management/automation solution? What considerations led you to choose DivvyCloud?  How will their solutions help Discovery enable its cloud adoption strategies?

Mr. Duvall: We chose not to concentrate on traditional Cloud Management Platforms, as our initial focus was not on a self-service portal but more on foundational capabilities and insight into our environment and cloud consumption. We began with a few key needs:

1) An ability to detect cloud change events and to respond with policy enforcement against items like tags, security groups, etc.;

2) A solid UI for dashboarding and rollup, with all functions also accessible via an open and documented API; and

3) Extensibility to ensure our automation and management platforms can quickly adjust to new use cases and technologies as we mature.

With these in mind, we found DivvyCloud to be a perfect fit for our needs – a very open and extensible platform that will allow us to mature our cloud management and automation.
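
To make Dave’s first requirement concrete – detecting drift and enforcing policy against items like tags – here is an illustrative sketch of the pattern. To be clear, this is a toy example, not DivvyCloud’s implementation; the required tags and their default values are invented:

# Toy sketch of tag policy enforcement: scan instances and fill in any
# required tags that are missing. Tag names and defaults are invented;
# boto3 credentials and the region are assumed.
import boto3

REQUIRED_TAGS = {"Environment": "untagged", "Owner": "unknown"}

ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        present = {t["Key"] for t in instance.get("Tags", [])}
        missing = [{"Key": k, "Value": v}
                   for k, v in REQUIRED_TAGS.items() if k not in present]
        if missing:
            ec2.create_tags(Resources=[instance["InstanceId"]], Tags=missing)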


Cloud services open the door for Federal flexibility and cost savings

Cloud computing and cloud services are some of the hottest and most talked-about technologies in the Federal government – and for good reason. The Federal government is constantly looking for ways to improve operational efficiency and slash costs – especially in its current, difficult budget environment.

The ability to pay for only what is needed makes cloud solutions a better, more cost effective approach to acquiring IT infrastructure than the traditional model of building large infrastructures that are only partly utilized, and then paying for their maintenance and support. Government agencies are also facing multiple mandates and initiatives to improve continuity of operations (COOP), workplace flexibility and employee mobility – all of which are areas where the cloud can pay significant dividends.

One of the ways that the Federal government is looking to implement the cloud is through operations and management services, which deliver cloud solutions that best fit an agency’s needs and requirements.

To learn more about why the cloud is increasingly essential in the Federal government, how operations and management services are making the cloud a reality across the government, and the role that cloud automation and management is playing in delivering those services, we sat down with Terrill Andrews of Accelera, a leading provider of virtual computing solutions. Here is what Terrill had to say:

 

Cloud Sprawl: What challenges/mandates/requirements are Federal agencies facing that are making a move to the cloud imperative? What benefits are they looking to derive from the cloud?

Mr. Andrews: A big driver for cloud adoption that we’re seeing in the Federal market is a desire to get off the hardware/software lifecycle treadmill.  This activity consumes a tremendous amount of time and energy, and one of the many benefits the cloud has to offer is condensing the annual budgeting cycle.

As public cloud providers mature – and as the market’s cautious perceptions around cloud security wane – it becomes harder and harder to avoid the reality that making a move to the cloud is the right thing to do.

Ancillary to that, mandates like FedRAMP and the Federal Cloud Computing Strategy provide the security framework and justification Federal IT managers need to go to the cloud.

Cloud Sprawl: Why are operations and maintenance services a desirable way for the Federal government to implement cloud solutions? What added benefits do operations and maintenance services deliver? What specifically about government agencies makes these services a better cloud implementation option?

Mr. Andrews: Federal IT programs have historically relied on operations and maintenance contracts to provide the operational assurance the government needs to support their IT systems.  As cloud enters the IT landscape, it brings with it the concept of the Service Level Agreement (SLA) to provide that same operational assurance.  Baked into a Federal cloud contract, a properly designed and financially backed SLA can provide the operational assurance an agency needs for peace of mind.

The real value that a managed service, cloud or otherwise, provides is cost savings.  Having to hire, train, and manage an in-house IT staff to provide operational assurance for IT systems can be tedious at best and wasteful at worst.  Federal IT managers have to maintain the right skills mix and scheduling for the team, as well as provide career path, mentoring and benefits for each of the individuals.

In addition, there are peaks and valleys of effort associated with the ongoing management of any IT environment.  During peaks you may not have all the people you need to get the job done. When you’re in a valley, you still have to pay for your team’s idle bandwidth, which is wasteful.

A managed service provider can provide a better service at a better price with a money back guarantee, and – in the process – hide all of the complexities of managing a dedicated IT team.

Cloud Sprawl: What challenges did Accelera face when moving into the cloud managed services space? What management, technology and client management challenges did managing cloud implementations for government agencies create?

Mr. Andrews: The biggest challenge Accelera has faced, and continues to face, with selling cloud and associated managed services is developing a customer’s trust.

A managed service is based on trust between you and your customer.  The customer is essentially handing you the keys to their kingdom.  They’re not going to put their job on the line if they’re not absolutely certain that you can deliver excellent service to them.  Customer trust was developed in our case through thoughtful planning and continuous monitoring and refinement of our service model.

Almost universally, our cloud customers require some level of management and visibility into their environments.  Finding the right mix of technologies and developing the right processes to meet this need was no simple task.  It required a lot of trial and error, but ultimately our dedication and commitment to our customers drove us to a system that works.

Cloud Sprawl: What role are cloud automation and cloud management platforms playing in helping Accelera overcome these challenges? What capabilities are they delivering and how is that helping the company deliver its managed cloud service offerings?

Mr. Andrews: The role that cloud automation and management platforms play is extremely important in that they provide the visibility and control that our customers demand for their cloud hosted IT systems.

Cloud automation and management platforms generate customer demand for us by providing capabilities like ensuring compliance with IT policies, as well as features like scheduled shutdowns to help customers control cloud costs.  These capabilities are essential for customers and offer competitive differentiation for us in the market.

Cloud Sprawl: What was Accelera looking for in a cloud automation platform? What features and capabilities were necessary?

Mr. Andrews: For us, the must-haves were cross-cloud visibility and reporting, the ability to ensure compliance with IT policies, and features like scheduled shutdowns to help customers control cloud costs. These are the capabilities customers ask for, and they offer competitive differentiation for us in the market.
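
A scheduled shutdown of the kind Terrill mentions reduces to a small job run from cron or any scheduler at the end of the business day. A sketch, where the “Schedule: office-hours” tag convention and the region are invented examples:

# Illustrative sketch: stop every running instance tagged for an
# office-hours schedule. Run from cron at the end of the business day.
# The tag convention is invented; boto3 credentials are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
result = ec2.describe_instances(Filters=[
    {"Name": "tag:Schedule", "Values": ["office-hours"]},
    {"Name": "instance-state-name", "Values": ["running"]}])

ids = [i["InstanceId"]
       for r in result["Reservations"] for i in r["Instances"]]
if ids:
    ec2.stop_instances(InstanceIds=ids)
    print("Stopped:", ", ".join(ids))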

To learn more about Accelera, its cloud automation implementation, and how cloud automation is a differentiator for the company, download their case study by clicking HERE.


DivvyCloud Wins Customers and Recognition for Cloud Infrastructure Automation Technology


DivvyCloud Wins Customers and Recognition for Cloud Infrastructure Automation Technology; Doubles Staff in 2015

DivvyCloud enjoyed a breakout year in 2015 with new customers including General Electric, Discovery Communications, SoftwareAG Government Services and Accelera Solutions

Arlington, Virginia (February 10, 2016) DivvyCloud, a developer of innovative technology to automate and manage today’s cloud infrastructure, achieved a breakout year in 2015.  Explosive growth in enterprise adoption of public and private cloud technologies is driving requirements for scalable, automated solutions to ensure security, cost and operational compliance across deployments of virtual “cloud” infrastructure.  DivvyCloud offers on-premise and hosted versions of its cloud agnostic, extensible platform allowing customers to automate cloud infrastructure and policy enforcement.

New Customers
DivvyCloud’s technology has attracted new customers, large and small, helping accelerate and scale cloud strategies with multi-cloud visibility, lifecycle management, and compliance automation.  Customers value DivvyCloud’s pervasive, real-time discovery of cloud resources and the ability to take automated action regardless of how those virtual resources are provisioned.  Customer DevOps and Enterprise IT teams have complete flexibility to create their own custom automation to monitor and self-heal cloud deployments, or leverage DivvyCloud’s growing repository of pre-built “Automation Bots.”

Business Growth
DivvyCloud raised a $1M second seed round from its existing investors in March to bring total invested capital to approximately $1.9M.  With the commitment of new customers, DivvyCloud exceeded its financial goals in 2015 achieving 7x growth in monthly recurring revenue in its first full year of commercial operation.  The company doubled its staff in 2015 and plans additional hiring to scale the business in 2016.  To accommodate rapid growth, DivvyCloud moved into new offices in Rosslyn, VA with easy access to Washington DC’s pool of technical and start-up business talent.

Recognition
DivvyCloud was named a Gartner 2015 “Cool Vendor” in Cloud Management.  The innovative start-up also enjoyed coverage from other leading analyst firms including 451 Research and Forrester Research.

Follow DivvyCloud on Twitter (@DivvyCloud) and LinkedIn or visit the website at DivvyCloud.com

About DivvyCloud
DivvyCloud software enables organizations to achieve their cloud computing goals by simplifying and automating the lifecycle management of IT resources across leading public/private cloud technologies.  DivvyCloud was founded by seasoned technologists who understand first hand what is necessary to succeed in today’s dynamic, multi-cloud world.  DivvyCloud delivers multi-cloud resource visibility, provisioning, automation, monitoring and migration capabilities to improve security, reduce costs, and maintain operational agility.

Media Contact
Peter Scott
DivvyCloud, Chief Strategy Officer
peter@divvycloud.com

Infrastructure as Code

CEO Brian Johnson will demonstrate how to use Python to create and manage software-defined infrastructure at the first Infrastructure as Code MeetUp.

Security Webinar

Join retired CIA CISO Bob Bigman and Brian Johnson of DivvyCloud as they discuss security and other enterprise considerations in moving to the Cloud.
