Kubernetes is, at its core, a container orchestration platform. It is NOT a container runtime (the software that executes containers and manages container images on a node), so it doesn’t replace Docker. Docker is what Kubernetes uses on each node (a worker machine, which may be a VM or a physical machine depending on the cluster) to run the containers. Kubernetes is in charge of deploying containers to the specific nodes that have capacity, or where they need to be based on labels or tags. It also handles the software-defined networking layer that allows containers to talk to one another, as well as services like load balancing, all inside the Kubernetes cluster.
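As a rough sketch of label-based placement, a Pod can request nodes carrying a particular label via `nodeSelector` (the node label `disktype=ssd`, the node name, and the Pod name below are hypothetical, not from this article):

```yaml
# Hypothetical sketch: first label a node, e.g.
#   kubectl label nodes worker-1 disktype=ssd
# then ask the scheduler to place this Pod only on matching nodes.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # hypothetical Pod name
spec:
  containers:
  - name: web
    image: nginx:1.25
  nodeSelector:
    disktype: ssd           # Pod is scheduled only onto nodes with this label
```

If no node carries the label, the Pod simply stays Pending until one does, which is how Kubernetes enforces "where they need to be" placement.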

What Kubernetes is not, as mentioned before, is a runtime. You still have Docker under the covers running most of these containers, though that may change as the ecosystem brings on more container runtimes like gVisor. But for now, Docker is still a required component.

Why Cross Cloud?

  • Business – Avoid vendor lock-in and ensure the best price per resource
    A lot of companies don’t want to be locked into a single vendor. We all played the data center game and know how painful it can be when you get locked into a long-term contract; it doesn’t give you the flexibility you need as an enterprise. Furthermore, you can’t negotiate optimal pricing if the vendor knows they’ve “got you by the tail.” They know you can’t leave their environment, so avoiding vendor lock-in is simply good business practice.
  • Stability – Keep applications online, even during catastrophic cloud outages
    Cloud outages do happen. Last year someone took down much of Amazon’s US East region by running an errant command on a node. These things happen; technology fails, but people fail far more often. We are likely to see more stability issues as systems become increasingly complex. So it’s important to spread yourself across multiple clouds to ensure that your application stays up, running, and making money if one cloud provider goes down.
  • Best in class services – Take advantage of the best service cloud providers have to offer.
    Providers are beginning to commoditize, meaning the infrastructure layer is becoming much the same across all of them. Servers, load balancers, and so on are generally alike and operated alike. The differentiators are higher-level services: Google focuses on AI and machine learning, while Amazon offers excellent database services such as its Relational Database Service (RDS) and DynamoDB.
  • Security – Protect your data by replicating it across multiple data storage systems
    We are seeing a lot more ransomware, and companies being held hostage when their Amazon account is compromised. You need to spread yourself out so that if you are compromised, you can isolate the affected area. This lets you keep your applications running in the other cloud provider while you deal with the situation.

Kubernetes makes all of this possible because it allows for effortless application portability. You can move applications from one server to another; in fact, the Kubernetes cluster does this for you all the time as part of the orchestration layer.

How Do You Deploy Across Multiple Clouds?

Kubernetes embraces the multi-cloud environment, not just because of Amazon and Google, but also because many enterprises still run substantial data center workloads and probably will for some time to come. Public cloud is not always cheaper, and not always the best option, so you need to be able to create a Kubernetes cluster that works across multiple locations, whether in public or private clouds. This is very important to Kubernetes.

This is what our application looks like today. We’ve pre-configured an Amazon environment and a Google environment, each with a VPC. Amazon has the “.200,” and Google has the “.201.” They have DNS, internet connectivity, and everything necessary to run. This is what it would look like if you deployed Kubernetes clusters in each of those environments: independent load balancers, independent Kubernetes apps and clusters, independent databases, and so on. However, you want to take the next step of linking these networks together so you can deploy a single Kubernetes cluster across both VPCs simultaneously. So how do you go about doing that? Find out by watching our video “How to Architect Kubernetes to Support Multi-Cloud Applications,” where our CEO Brian Johnson walks you through best practices for designing Kubernetes to enable multi-cloud.
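Once the networks are linked and nodes from both VPCs have joined a single cluster, one way to spread an application across providers is pod anti-affinity over a node label. The label key `cloud` and the Deployment name below are hypothetical illustrations, not part of the architecture in the video (e.g. you might run `kubectl label nodes aws-node-1 cloud=aws` and `kubectl label nodes gcp-node-1 cloud=gcp` first):

```yaml
# Hypothetical sketch: keep replicas on nodes with different values of the
# "cloud" label, so losing one provider does not take down every replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-cloud-app      # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: multi-cloud-app
  template:
    metadata:
      labels:
        app: multi-cloud-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: multi-cloud-app
            topologyKey: cloud   # no two replicas share a "cloud" label value
      containers:
      - name: app
        image: nginx:1.25
```

With two replicas and two label values, the scheduler is forced to place one replica per cloud, which is the stability property the bullets above argue for.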

If you’re interested in a brutal comparison of cloud-hosted Kubernetes providers, check out our article “Kubedex’s Comparison of Google GKE vs Microsoft AKS vs Amazon EKS.”

At DivvyCloud, our software simplifies the job of securing Kubernetes clusters and workloads across public clouds including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Alibaba Cloud. If you’re interested in learning more, get your free trial of DivvyCloud or speak with a DivvyCloud expert today!


DivvyCloud minimizes security and compliance risk by providing virtual guardrails for security, compliance, and governance to customers embracing the dynamic, self-service nature of public cloud and container infrastructure. Customers like General Electric, Discovery Communications, and Fannie Mae run DivvyCloud’s software to achieve continuous security governance in cloud and container environments (AWS, Azure, GCP, Alibaba, and Kubernetes). First, our software performs real-time, continuous discovery of infrastructure resources, allowing customers to identify risks and threats. Second, customers can implement out-of-the-box or custom cloud-native policy guardrails that identify and alert on violations. Third, we automate the enforcement and remediation of these policies.