Content by DevOps.com
Twenty years ago, no one in IT could have predicted the monumental role that cloud computing would play in our everyday lives. To play off the words of Back to the Future's Doc Brown: "On-premises? Where we're going, we don't need on-premises." As technology continues to evolve, though, security teams face greater challenges.
Infrastructure is no longer server rooms and boxes; it is dynamic, changing constantly to support fluctuations in demand. And moving to the cloud introduces new third parties into the digital mix. Although AWS, Google, and other cloud providers have a lot at stake when it comes to protecting customer data, security responsibility is never completely out of an organization's hands. Compliance considerations are yet another complicating factor: in many cases, regulations around data access, privacy, and residency were created for an on-premises world.
Looking to the future, companies want to maximize their investments in the cloud while better addressing these security challenges. However, the rapidly changing nature of the cloud makes it difficult for security teams to fully understand the scope of assets they are responsible for at any given moment, and that visibility is a prerequisite for genuine security. After all, how can a company close the gaps in its cloud security if it can't account for all of its cloud assets, especially when working with multiple cloud providers?
Security teams need to inventory their entire cloud infrastructure, understand what controls are in place and find ways to detect and mitigate any vulnerabilities or misconfigurations in a timely manner.
Understanding Cloud Workloads
When moving assets to the cloud, inventorying devices with dynamic IP addresses, which change constantly, is far harder than tracking more static devices. The mass adoption of containers and microservices, some of which exist for just seconds before disappearing, makes dynamic systems even harder to track.
One of the most important improvements to come out of cloud growth is that everything has an API, which can be a massive advantage when inventorying assets. Because each cloud provider is self-contained, finding out when a new bucket or instance is created is certainly possible. Organizations with advanced development capabilities can write and maintain their own applications to identify every new cloud instance for each provider, but doing this for multiple cloud providers and correlating cloud asset inventories with other security control data becomes a major investment of resources. Still, the ability to query every cloud resource programmatically makes comprehensive asset management an achievable goal.
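As a minimal sketch of what API-driven inventory looks like, the snippet below polls several providers and merges the results into one normalized list. The provider client functions here are hypothetical stand-ins for real SDK calls (for example, an AWS or Google Cloud "list instances" endpoint); a production version would page through live API responses instead.

```python
# Sketch: multi-cloud asset inventory built from provider APIs.
# The list functions below are hypothetical stand-ins for real
# cloud SDK calls, which would require credentials to run.

def fetch_assets(provider_name, list_api):
    """Normalize whatever a provider API returns into one record shape."""
    return [
        {"provider": provider_name, "asset_id": a["id"], "type": a["type"]}
        for a in list_api()
    ]

def aws_list():
    # Stand-in for a real "describe instances / list buckets" call.
    return [{"id": "i-0abc", "type": "instance"},
            {"id": "bucket-logs", "type": "bucket"}]

def gcp_list():
    # Stand-in for a real "list VM instances" call.
    return [{"id": "vm-web-1", "type": "instance"}]

def build_inventory(providers):
    """Poll every provider and merge results into a single inventory."""
    inventory = []
    for name, api in providers.items():
        inventory.extend(fetch_assets(name, api))
    return inventory

inventory = build_inventory({"aws": aws_list, "gcp": gcp_list})
```

The important design point is the normalization step: once every provider's output shares one record shape, correlating the inventory with other security control data becomes a set operation rather than a per-provider special case.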
It is essential for organizations to carefully manage their assets and understand exactly what their security controls cover. By categorizing their resources, companies not only better understand what they have, but can make informed decisions about exactly what additional security measures need to be taken. For example, vulnerability scanners may not be scanning critical workloads, or perhaps dedicated cloud workload protection platforms need to be deployed for certain cloud assets. Because cloud assets can be created so quickly, general-purpose security systems struggle to keep up; these assets need security systems tailored specifically to keeping them safe.
A security tool can only secure devices that it knows exist. Once an organization understands the assets it has, it can expand the coverage of any existing tools to devices that were previously unknown. When that coverage is fully adopted, asset management can help the organization find gaps in its coverage to identify areas for additional protection and the right tools to secure them.
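The gap-finding step described above can be reduced to a set difference: assets in the inventory that no security tool knows about. The asset identifiers below are illustrative, not from any real environment.

```python
# Sketch: find assets the inventory knows about but a security
# tool does not. Asset IDs are illustrative examples.

def coverage_gaps(inventory_ids, tool_known_ids):
    """Return assets present in the inventory but unknown to a tool."""
    return sorted(set(inventory_ids) - set(tool_known_ids))

inventory = ["i-0abc", "i-0def", "bucket-logs"]   # full asset inventory
scanner_sees = ["i-0abc"]                          # what one tool covers

gaps = coverage_gaps(inventory, scanner_sees)
# gaps -> ["bucket-logs", "i-0def"]
```

Running the same comparison against each tool in the stack shows, per tool, exactly which devices were previously unknown and need coverage expanded.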
Addressing Vulnerabilities and Misconfigurations
Once an organization establishes a comprehensive asset inventory and visibility, the next step is to address any vulnerabilities, gaps in coverage, or misconfigurations that are found. With the ability to spin cloud instances up or down at will, it is inevitable that instances will be forgotten and inadvertently left unprotected. Since cybercriminals automatically scan public cloud infrastructure to exploit these types of vulnerabilities, public-facing cloud workloads are especially at risk.
Similarly, finding and addressing misconfigurations in cloud workloads becomes more complex. With the highly configurable nature of the cloud, mistakes happen often, and without proper security controls in place to detect these misconfigurations, attackers can easily access and steal data. Organizations must implement a process to continuously detect cloud workload misconfigurations.
A great place to start is the Center for Internet Security (CIS) Benchmarks, which provide objective security guidelines. By comparing data from each cloud provider with the rules in the CIS Benchmarks, security teams can identify assets that don't adhere to best practices.
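A benchmark comparison of this kind is essentially a rules engine: each rule tests one asset attribute against a best practice. The sketch below shows the shape of that check; the rule names and asset fields are illustrative, not actual CIS Benchmark text, and a real implementation would map each rule to a published benchmark recommendation.

```python
# Sketch of a benchmark-style misconfiguration check. Rule IDs and
# asset fields are illustrative, not actual CIS Benchmark content.

RULES = [
    # Flag storage buckets exposed to the public internet.
    ("bucket-public-access",
     lambda a: a.get("type") == "bucket" and a.get("public", False)),
    # Flag instances with SSH open to the world.
    ("instance-open-ssh",
     lambda a: a.get("type") == "instance" and 22 in a.get("open_ports", [])),
]

def evaluate(assets):
    """Return (asset_id, rule_id) for every rule an asset violates."""
    findings = []
    for asset in assets:
        for rule_id, violates in RULES:
            if violates(asset):
                findings.append((asset["id"], rule_id))
    return findings

assets = [
    {"id": "bucket-logs", "type": "bucket", "public": True},
    {"id": "i-0abc", "type": "instance", "open_ports": [443]},
]
findings = evaluate(assets)
# findings -> [("bucket-logs", "bucket-public-access")]
```

Feeding provider API data through checks like these, on a schedule, is what turns a one-time audit into continuous misconfiguration detection.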
The key to maintaining security in a modern cloud environment is to implement continuous discovery and interrogation, which requires automation. This includes the ability to create tailored alerts when certain conditions are met, and automatically deploy and change security controls as needed. Teams should set restrictive access controls by default so that non-approved users don’t have access to sensitive data. Any time a cloud workload deviates from policy, there should be a plan in place to automate the corrective action.
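The "automate the corrective action" step can be sketched as a list of policies, each pairing a deviation test with a remediation function. Everything here is a hypothetical illustration of the pattern, not any vendor's API: the deviation is public access on a workload, and the fix restores the restrictive default.

```python
# Sketch of policy-driven auto-remediation: each policy pairs a
# deviation check with a corrective action. Names are illustrative.

def deviates_public(asset):
    """Deviation: workload is publicly accessible."""
    return asset.get("public", False)

def remediate_public(asset):
    """Corrective action: restore the restrictive default."""
    asset["public"] = False
    return "access-restricted"

POLICIES = [(deviates_public, remediate_public)]

def enforce(assets):
    """Apply every matching corrective action; return an audit log."""
    actions = []
    for asset in assets:
        for deviates, fix in POLICIES:
            if deviates(asset):
                actions.append((asset["id"], fix(asset)))
    return actions

workloads = [
    {"id": "bucket-logs", "public": True},
    {"id": "bucket-int", "public": False},
]
action_log = enforce(workloads)
# action_log -> [("bucket-logs", "access-restricted")]
```

The returned action log doubles as the tailored alert stream: each entry records which workload deviated and what automated correction was applied.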
Computing technologies continue to evolve at a staggering pace. The computing environment 20 years from now won’t resemble even the most far-fetched, futuristic ideas today. The evolution of cloud adoption from justified skepticism to ubiquity taught us an important lesson: change is constant. It is incumbent upon technology and security professionals to constantly challenge the expectations of security solution coverage and adherence to controls to keep data and systems protected.