An organization transitioning to a cloud provider such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) should immediately consider shifting to a new model of security: continuous control and enforcement of secure configuration of cloud services. Importantly, this cannot be a one-time event; it must be monitored and enforced constantly, because the software-defined nature of the cloud leads to frequent change.
Misconfiguring a cloud database, storage asset, or search engine can have massive consequences, especially when it contains company-proprietary data. Just ask Capital One: a misconfigured web application firewall allowed a former AWS employee to obtain credentials, escalate privileges, and access an S3 bucket, exposing the data of more than 100 million customers. Facebook, earlier this year, exposed 540 million user records through misconfigured AWS S3 buckets belonging to not one, but two different Facebook apps.
Other recent breaches:
- FedEx (unprotected cloud server)
- Verizon (open S3 bucket)
- Dow Jones (open cloud storage)
- Adidas (undisclosed cause)
- National Credit Federation (open S3 bucket)
- Australian Broadcasting Corp (open S3 bucket)
- Macy’s (undisclosed cause)
- GoDaddy (open S3 bucket)
Why are so many companies suffering misconfigurations?
Companies lack a holistic approach to security, which opens them up to undue risk. The main causes are:
- Inexperienced users
- Failure to shift from outdated security models
- A lack of unified cloud visibility
- Unprecedented rate of change, scale, & scope
Overnight, we shifted from people who had spent their entire careers in IT and security and understood the security process, to people who had never thought about security deploying applications into infrastructure. The issues making today's headlines are not complex attacks; they are standard misconfigurations.
The number of people touching cloud infrastructure has changed dramatically. In the past, perhaps 40 people touched the infrastructure at any moment; today, 3,000 people may be deploying applications and making engineering changes. Enterprises have also moved from once-a-week production deployments to deployments happening hourly. These continuous integration and deployment approaches produce massive infrastructure, mixed with a large number of users and changes happening at once. The result is a loss of control: self-service bypasses the lessons IT learned over decades. Even if IT leverages a tool that provides alerts, staff will still fall victim to alert fatigue. This is not a sustainable approach.
How to avoid these misconfigurations?
Companies need a security automation tool that enables them to change how they deploy and build applications entirely. This is not necessarily just a technology shift, but more of a cultural change. Everything an IT department does will need to change: how they deploy applications, what applications they build, how they learn from their customers, etc. All of that has to change because engineering teams have direct access to infrastructure and old processes aren’t working. Simple truth: the rate of change and the dynamic nature of software-defined infrastructure has outstripped human capacity.
Organizations need a security solution that provides the automation essential to enforce policy, reduce risk, provide governance, impose compliance, and increase security across large-scale hybrid cloud infrastructure. Automation should take the pain out of securing cloud infrastructure in a shared-responsibility world by providing a framework for what organizations should be doing via a continuous, real-time process. By utilizing security automation, companies can stay agile and innovate while maintaining the integrity of their technology stack and applying the policy they deem necessary to operate their business.
Core to a company's solution should be an easy-to-use interface from which clients can manage their existing cloud infrastructure. At scale, policy enforcement cannot and should not be performed manually. Security automation can discover and automatically take action to address policy infringements or security issues (like an exposed Elasticsearch database). It also allows for simultaneous offense and defense, resulting in increased innovation and reduced risk.
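To make this concrete, here is a minimal sketch of what "discover a violation and automatically take action" might look like as code. The resource model, policy predicate, and remediation function are all hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass


@dataclass
class Resource:
    """Simplified stand-in for a discovered cloud resource."""
    resource_id: str
    resource_type: str            # e.g. "s3_bucket", "elasticsearch"
    publicly_accessible: bool = False


def violates_no_public_access(resource: Resource) -> bool:
    """Policy: storage and search resources must not be internet-exposed."""
    return (resource.resource_type in {"s3_bucket", "elasticsearch"}
            and resource.publicly_accessible)


def remediate_public_access(resource: Resource) -> None:
    """Remediation: flip the resource private. A real implementation would
    call the cloud provider's API here (e.g. to block public bucket access)."""
    resource.publicly_accessible = False


def enforce(inventory: list) -> list:
    """Scan the inventory, remediate violations, and report what was fixed."""
    fixed = []
    for res in inventory:
        if violates_no_public_access(res):
            remediate_public_access(res)
            fixed.append(res.resource_id)
    return fixed
```

Running `enforce` over a discovered inventory both closes the exposure and returns an audit trail of which resources were touched, which is what turns alert noise into enforceable policy.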
Few companies get remediation automation right at the beginning; it takes time to establish a set of remediation policies that work. The important first step is making sure all members of a company's IT staff are committed to using remediation automation. Once that commitment is made, the company can develop appropriate policies iteratively, in a way that fits the needs of the enterprise's day-to-day operations.
Watch DivvyCloud’s 60-second video to learn how we help customers like GE, 3M, Autodesk, Discovery, and Fannie Mae stay secure and compliant.
DivvyCloud minimizes security and compliance risk by providing virtual guardrails for security, compliance, and governance to customers embracing the dynamic, self-service nature of public cloud and container infrastructure. Customers like General Electric, Discovery Communications, and Fannie Mae run DivvyCloud’s software to achieve continuous security governance in cloud and container environments (AWS, Azure, GCP, Alibaba, and Kubernetes). First, our software performs real-time, continuous discovery of infrastructure resources, allowing customers to identify risks and threats. Second, customers can implement out-of-the-box or custom cloud-native policy guardrails that identify and alert on violations. Third, we automate the enforcement and remediation of these policies.
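The three-step model described above (continuous discovery, policy guardrails, automated enforcement) can be sketched as a recurring evaluation cycle. The stubbed discovery source and the policy structure below are illustrative assumptions, not DivvyCloud's actual interfaces:

```python
from typing import Callable, Dict, List

# A guardrail pairs a check with a remediation action.
Policy = Dict[str, Callable]


def discover() -> List[dict]:
    """Step 1: discovery. Stubbed here; a real implementation would call
    each cloud provider's inventory APIs across accounts and regions."""
    return [
        {"id": "bucket-7", "type": "storage", "public": True},
        {"id": "vm-2", "type": "compute", "public": False},
    ]


def run_cycle(policies: List[Policy], inventory: List[dict]) -> List[str]:
    """Steps 2 and 3: evaluate each guardrail against each resource and
    enforce it, returning a record of the actions taken. In production this
    cycle would run continuously on a schedule rather than once."""
    actions = []
    for resource in inventory:
        for policy in policies:
            if policy["check"](resource):
                policy["remediate"](resource)
                actions.append(f"{policy['name']}:{resource['id']}")
    return actions


# Example guardrail: storage resources must not be publicly accessible.
no_public_storage = {
    "name": "no-public-storage",
    "check": lambda r: r["type"] == "storage" and r["public"],
    "remediate": lambda r: r.update(public=False),
}
```

Each cycle yields both a corrected inventory and an action log, giving the continuous, real-time enforcement the model calls for.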