A little more than two months into 2019, and for the fifth time we are writing about a massive company data leak caused by a misconfigured Elasticsearch server. Reminiscent of the AWS S3 bucket leaks of 2018, Elasticsearch servers are proving difficult for companies to configure correctly.
What happened this time?
According to an article on Yahoo Finance, an exclusive Dow Jones & Co. watchlist of more than 2.4 million high-risk clients was unintentionally exposed due to a misconfigured and unsecured Elasticsearch database hosted on AWS. “Used by eight of the world’s ten largest, global, financial institutions Dow Jones Watchlist is statistically proven to be the most accurate, complete, and up-to-date list of senior PEPs (politically exposed persons), their relatives and close associates,” wrote security researcher Bob Diachenko. The watchlist contained the identities of government officials, politicians, and people of political influence in every country of the world.
Diachenko found the exposed watchlist in late February after a third-party company left it open without a password. This security lapse adds to a growing list of organizations in 2019 that have left Elasticsearch servers unprotected, exposing large amounts of proprietary data.
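To make the failure mode concrete, here is a minimal sketch of the kind of check a researcher, or an internal audit script, might run: an Elasticsearch cluster that answers its REST API without credentials is readable by anyone on the internet. The function name and classification logic here are illustrative assumptions, not details from Diachenko's report.

```python
import json
import urllib.error
import urllib.request


def probe_elasticsearch(host: str, port: int = 9200, timeout: float = 5.0) -> str:
    """Classify an Elasticsearch endpoint as 'exposed', 'secured', or 'unreachable'."""
    # _cluster/health is a standard Elasticsearch REST endpoint; a 200 response
    # with no credentials supplied means the cluster is open to anonymous reads.
    url = f"http://{host}:{port}/_cluster/health"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            health = json.load(resp)
            return "exposed" if "cluster_name" in health else "secured"
    except urllib.error.HTTPError as err:
        # 401/403 indicate that authentication is actually being enforced.
        return "secured" if err.code in (401, 403) else "unreachable"
    except (urllib.error.URLError, OSError):
        return "unreachable"
```

A properly secured cluster would answer this anonymous probe with 401, or not be reachable from the public internet at all.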
DivvyCloud’s CTO, Chris DeRamus, told SiliconAngle “Dow Jones suffered a similar cloud storage misconfiguration two years ago that exposed the information of 2.2 million customers. It’s concerning that with this new exposure, Dow Jones did not take proper steps to strengthen its security posture. Organizations must realize the importance of balancing their use of the public cloud, containers, hybrid infrastructure and more with proper security controls.”
Why are so many companies suffering misconfigurations?
Overnight, we have gone from people who spent their entire careers in IT and security and understood the security process to people who have never thought about security deploying applications onto their infrastructure. The issues making headlines in today's security realm are not complex; they are standard misconfiguration issues. Still, this is not new. We went through this 15 years ago as we learned to build out data centers and server farms. This is simply a new set of lessons requiring a new approach. The problem is that none of that knowledge transferred, because the shift to the cloud wasn't driven by IT; it was driven by engineering teams.
The number of people touching cloud infrastructure has changed dramatically. In the past, perhaps 40 people touched the infrastructure at any given moment; today, 3,000 people deploy applications and make engineering changes to infrastructure. Enterprises have also moved from once-a-week production deployments to production deployments happening on an hourly basis. These continuous integration and continuous deployment (CI/CD) practices lead to massive infrastructure, mixed with a large number of users and changes happening at once. This, in turn, leads to a loss of control and a self-service culture that bypasses the lessons IT has learned. Even if IT leverages a tool that provides alerts, teams will still fall victim to alert fatigue. This is not a sustainable approach.
How to avoid these misconfigurations?
The DivvyCloud approach enables organizations to change how they build and deploy applications entirely. This is not just a technology shift but a cultural change. Everything an IT department does will need to change: how it deploys applications, which applications it builds, how it learns from its customers, and so on. All of that has to change because engineering teams now have direct access to infrastructure, and old processes aren't going to work. The simple truth is that the rate of change and the dynamic nature of software-defined infrastructure have outstripped human capacity. If a company receives a list of a thousand problems, then even with 100 people tasked with resolving them, by the time the list is worked through the problems have disappeared, moved, or been replaced by even more significant issues. Enterprises need to be able to deal with faults in real time.
Organizations need a security solution that provides the automation essential to enforce policy, reduce risk, provide governance, ensure compliance, and increase security across large-scale hybrid cloud infrastructure. Automation should take the pain out of securing cloud infrastructure in a shared-responsibility world by providing a framework for what organizations should be doing, via a continuous, real-time process. By utilizing security automation, companies can stay agile and innovate while maintaining the integrity of their technology stack and applying the policies they deem necessary to operate their business.
Core to such a solution should be an easy-to-use interface from which clients can manage their existing cloud infrastructure. At scale, policy enforcement cannot and should not be performed manually. Security automation can discover and automatically act on policy infringements or security issues (like an exposed Elasticsearch database). It also allows for simultaneous offense and defense, resulting in increased innovation and reduced risk.
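As an illustration of that detect-and-remediate loop, here is a minimal, hypothetical sketch in plain Python (the resource model, policy table, and function names are invented for illustration, not the DivvyCloud API): each service maps to a violation check and a remediation action, and a single pass over the resource inventory fixes anything out of compliance.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Resource:
    """Hypothetical inventory record; a real tool would pull this from cloud APIs."""
    resource_id: str
    service: str            # e.g. "elasticsearch", "s3"
    publicly_readable: bool
    remediated: bool = False


def restrict_access(resource: Resource) -> None:
    """Illustrative remediation: mark the resource as locked down."""
    resource.publicly_readable = False
    resource.remediated = True


# Policy table: service name -> (violation predicate, remediation action).
POLICIES: dict[str, tuple[Callable[[Resource], bool], Callable[[Resource], None]]] = {
    "elasticsearch": (lambda r: r.publicly_readable, restrict_access),
    "s3": (lambda r: r.publicly_readable, restrict_access),
}


def enforce(inventory: list[Resource]) -> list[str]:
    """Scan the inventory, remediate violations, and return the IDs that were fixed."""
    fixed = []
    for resource in inventory:
        policy = POLICIES.get(resource.service)
        if policy is not None:
            is_violation, remediate = policy
            if is_violation(resource):
                remediate(resource)
                fixed.append(resource.resource_id)
    return fixed
```

Run continuously against a live inventory feed, a loop like this closes the gap between a misconfiguration appearing and a human noticing the alert.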
Interested in learning more? Speak with a DivvyCloud expert today!