The attack differs from Magecart’s past hacks, which were targeted. This is a new “spray and pray” technique: the Magecart hackers are altering the code of countless sites, both sites with an e-commerce function and sites with no credit card processing at all. The hope in casting such a large net is that the number of compromised sites that do process credit cards will outweigh the compromised sites that have no e-commerce function.
The S3 Bucket Problem
Amazon S3 buckets are private and thus secure by default. Trouble arises when companies actively change those permissions, either somewhere in the development process or when they employ a third-party contractor for additional cloud work. These misconfigurations have caused plenty of problems before, but those leaks usually led to the exposure of personally identifiable information, such as huge databases of usernames, passwords, birthdays, and social security numbers. This is because those misconfigurations typically give interlopers read permission, but not the ability to write new code. The Magecart hackers are scanning for misconfigurations that allow both read and write permissions, and they have now identified more than 17,000 vulnerable domains.
However, not all of those 17,000 domains are at risk, because not all of them process credit card transactions. It’s also unclear how many distinct S3 buckets are affected, because multiple domains can link back to the same one. The domains most drastically affected by the hack, then, are those that both link to misconfigured S3 buckets and process credit cards.
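To make the misconfiguration concrete, here is a minimal sketch (not Magecart’s actual tooling) of how public grants in an S3 bucket ACL can be classified. The grant structure mirrors the `Grants` list that boto3’s `get_bucket_acl` returns; the example ACL itself is hypothetical.

```python
# Classify public grants in an S3 bucket ACL. The structure below mirrors
# the 'Grants' list returned by boto3's get_bucket_acl; the values are
# hypothetical, for illustration only.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_permissions(grants):
    """Return the set of permissions granted to everyone (the AllUsers group)."""
    return {
        g["Permission"]
        for g in grants
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") == ALL_USERS
    }

# A bucket misconfigured for both public read AND public write -- the
# combination the Magecart actors were scanning for.
grants = [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "WRITE"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"},
]

perms = public_permissions(grants)
print(sorted(perms))  # a public WRITE grant lets anyone alter hosted JavaScript
```

A bucket whose public permissions include only READ can leak data; one that also grants WRITE lets an attacker inject skimming code, which is what makes this campaign different.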
How are these attackers able to breach company storage containers?
Oftentimes the storage container is simply configured incorrectly: it was created with permissions that are too broad, allowing anyone to access the data. These containers may also have been serviced by people who aren’t familiar with security; the developer who created the container didn’t know how to properly secure it, or the exposure was a simple oversight. For example, say a developer is troubleshooting an issue causing an application to fail and suspects storage container access is to blame. The developer tweaks the storage container configuration, leaving it open to the public, and once the application starts working again, moves on to another project. That company now has an exposed storage container. It may not even be the developer’s fault, as someone else may have altered the container’s configuration at a later date for any number of reasons. Many organizations are left vulnerable because they don’t have processes that prevent insecure software deployments.
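The troubleshooting scenario above can be sketched in a few lines. This is a hypothetical illustration: with real AWS the developer would call `put_bucket_acl` via boto3, but here the bucket state is modeled as a plain dict so the configuration drift is easy to see.

```python
# Hypothetical sketch of the troubleshooting scenario: a developer relaxes
# a bucket's canned ACL while debugging and forgets to revert it. With real
# AWS this would be s3.put_bucket_acl(Bucket=..., ACL=...); here the state
# is a plain dict for illustration.

bucket = {"name": "example-assets", "acl": "private"}  # the secure default

def set_canned_acl(bucket, acl):
    """Stand-in for put_bucket_acl: record the bucket's canned ACL."""
    bucket["acl"] = acl

def is_public(bucket):
    """True if the canned ACL exposes the bucket to anonymous users."""
    return bucket["acl"] in ("public-read", "public-read-write")

# Debugging session: open the bucket up "temporarily"...
set_canned_acl(bucket, "public-read-write")
# ...the app works again, the developer moves on, and nobody reverts it.
print(is_public(bucket))
```

The point of the sketch is that the dangerous state change is one line, while noticing and reverting it requires a process that many teams don’t have.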
Who is Responsible?
The important thing to remember in the wake of these massive hacks and leaks over the past few months is that the fault does not lie with the Cloud Service Providers, but rather with the cloud customers. The CSPs are responsible for securing the underlying components of cloud services. For example, the default settings on Amazon S3 buckets are private, ensuring security. The customer is responsible for how they use those cloud services, including properly configuring identity and access management (IAM), storage and compute settings, threat analysis and defense, and the security of the application and data processed and stored on the cloud. When the customer makes all these changes and utilizes these services, they are the ones responsible for tying up all the loose ends.
The problem is that at enterprise scale, self-service access to the cloud creates a gigantic playing field that far outstrips the human capacity to referee. To this end, CSPs offer a plethora of security controls; AWS alone provides more than 30 cloud-security-related services. While these controls are vital to a secure cloud configuration, simply turning them on comes with no guarantees.
How Do You Avoid Exposing S3 Buckets?
Amazon S3 buckets are private by default and can only be accessed by users who have been explicitly granted access. By default, the account owner and the resource creator are the only ones with access to an S3 bucket and its objects, so someone has to actively misconfigure a bucket to expose the data.
Amazon has been actively working to help companies avoid breaches caused by misconfiguration. In November 2017, AWS added a number of new Amazon S3 features to augment data protection and simplify compliance. For example, they made it easier to ensure encryption of all new objects and to monitor and report on their encryption status. They have also provided guidance on approaches to combat this issue, like the use of AWS Config to monitor for and respond to S3 buckets allowing public access.
As a most basic first step to avoiding S3 bucket leaks, take advantage of the native AWS capabilities. Ensure that you are always purposefully using AWS S3 access policies to define who can access the objects stored within. Ensure your team is well trained to never open access to the public, unless absolutely necessary, as doing so can result in the exposure of PII and other sensitive data. And help prevent unauthorized access to your data by taking advantage of capabilities like AWS Config.
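As a sketch of the kind of check AWS Config can run for you, here is a simplified, local version of the logic behind a public-access rule: flag any bucket policy statement that allows access to an anonymous principal. The policy document is hypothetical, and real AWS Config evaluates buckets server-side; this only illustrates the idea.

```python
import json

# Simplified local version of the logic behind AWS Config's public-access
# rules for S3: flag any policy statement that Allows access to everyone.

def public_statements(policy_doc):
    """Return policy statements granting access to an anonymous principal."""
    flagged = []
    for stmt in policy_doc.get("Statement", []):
        principal = stmt.get("Principal")
        is_anonymous = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_anonymous:
            flagged.append(stmt)
    return flagged

# Hypothetical bucket policy of the kind that exposes data to the world.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}
""")

for stmt in public_statements(policy):
    print("NON_COMPLIANT:", stmt["Action"], "is open to everyone")
```

Running checks like this continuously, rather than once at deployment, is what catches the configuration drift described earlier.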
The challenge is that many organizations struggle to adopt and enforce best practices consistently, and only 100% consistency can ensure protection against a breach. This is why an investment in cloud operations is a vital additional step.
How does DivvyCloud help customers fix the problem?
DivvyCloud’s customers leverage bot automation to remove public permissions from the access control list where necessary. Customers can also use bucket policies in place of access control lists for finer-grained access control. DivvyCloud’s bot automation helps prevent data breaches by finding, alerting on, and remediating misconfigured storage containers before vulnerabilities can be exploited.
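As a hedged sketch of what that remediation step might look like (this is not DivvyCloud’s actual implementation), a bot can take a bucket’s ACL grant list, strip every grant to the public groups, and write the cleaned list back:

```python
# Illustrative remediation sketch (not DivvyCloud's implementation): remove
# all grants to the public AWS groups from an S3 ACL grant list.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def strip_public_grants(grants):
    """Return the grant list with AllUsers/AuthenticatedUsers grants removed."""
    return [
        g for g in grants
        if g["Grantee"].get("URI") not in (ALL_USERS, AUTH_USERS)
    ]

grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "WRITE"},
]

cleaned = strip_public_grants(grants)
# With real AWS, the bot would then call put_bucket_acl with the cleaned
# grant list to make the change effective.
print(len(cleaned))  # only the owner's FULL_CONTROL grant remains
```

The value of automating this is speed: the window between a bucket going public and being re-secured shrinks from days or weeks to minutes.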
One of the things DivvyCloud does well is not only flag the problem in real time but also give customers an exact pointer to where the problem is. If somebody told you “there is an open S3 bucket” but didn’t narrow it down to a granular level, where would you start? This is why DivvyCloud doesn’t simply alert that there is an open S3 bucket; we take action and tell the customer exactly which bucket in which account.
In the end, the way to avoid exposing data in cloud storage containers really is common sense: don’t ever configure the storage containers to be exposed to the public. Organizations need to learn about security configurations while evaluating their public cloud options, or pay someone else, like DivvyCloud, to do it for them. Otherwise, it’s only a matter of time before they become a headline.
Install DivvyCloud today with a free 30-day trial and make these storage container misconfigurations a thing of the past (now and forever).
Watch DivvyCloud’s 60-second video to learn how we help customers like GE, 3M, Autodesk, Discovery, and Fannie Mae stay secure and compliant.
DivvyCloud minimizes security and compliance risk by providing virtual guardrails for security, compliance, and governance to customers embracing the dynamic, self-service nature of public cloud, and container infrastructure. Customers like General Electric, Discovery Communications, and Fannie Mae run DivvyCloud’s software to achieve continuous security governance in cloud and container environments (AWS, Azure, GCP, Alibaba, and Kubernetes). First, our software performs real-time, continuous discovery of infrastructure resources allowing customers to identify risks and threats. Second, customers can implement out-of-the-box or custom cloud-native policy guardrails that identify and alert on violations. Third, we automate the enforcement and remediation of these policies.