This article can also be found in the Premium Editorial Download "Information Security magazine: Seven Outstanding Security Pros in 2012."
Most organizations have already started using virtualization technology or cloud computing, yet some remain reluctant to move their mission-critical (tier-1) applications to these relatively new environments. While the flexibility and cost benefits of virtualization are widely accepted, questions linger about how to adapt to new and different risks. Security and compliance top the list of reasons organizations delay adoption.
Concerns about security in a virtual environment almost always begin with a study of the relationship between guest and host, but that is just the tip of the iceberg. In the end, a far more comprehensive view of risk management is necessary, one that includes virtual machines (VMs), hypervisors, networking, storage and management. From the configuration of software-based networking devices to software-based data centers, the processes and procedures for managing resources are an important part of any assessment of cloud risk and compliance. An assessor will not only review the configuration of the VM and hypervisor technology, but also look at how logical concepts such as port groups, resource pools and clusters are managed in relation to data flows and business logic.
Let’s take a look at some of the ways virtualization and cloud computing impact compliance and how organizations can tackle cloud compliance issues.
Start with a standard baseline
A good strategy for managing cloud compliance is to establish a clear and transparent relationship with the cloud service provider, facilitated by standards such as SSAE 16 SOC 2 or ISO 27001. A framework both parties agree on makes it easier to work through the assessment and focus on resolving areas of concern. A provider that refuses to allow on-site physical assessments, for example, may not be acceptable to an assessor or a cloud customer. Even when a provider's claim of identical controls across its many physical locations can be verified on paper, the human element of managing those controls can still cause them to drift out of place, which warrants on-site audits.
Perhaps the easiest way to work through cloud compliance challenges with cloud providers is to approach them first at a technical level and in terms of how compliance has been handled in the past. An operating system has typically been brought into compliance by hardening it to a set of published guidelines. Systems within government must adhere to documented security standards, such as the U.S. Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) or publications from the National Institute of Standards and Technology (NIST). Systems within a commercial environment may need to be measured against completely different guidelines from the Center for Internet Security (CIS) or from an industry group such as the Payment Card Industry (PCI) Security Standards Council (SSC). These guidelines, like the ISO and SSAE 16 standards but often with a regulatory authority overseeing their adoption, help clarify exactly what a provider must do to achieve compliance.
Take control of continuous change
Let’s say a Windows 7 system on hardware is configured to meet CIS Benchmark version 1.2.0, released on March 30. Move that same Windows 7 system from hardware to a VM on a provider-managed hypervisor, and a compliance assessment of that system can look very different. Move it into a cloud environment and it changes again. The operating system itself remains almost identical, but an updated benchmark is required to account for the relationship with the hypervisor, and then with the systems used to manage hypervisor resources. Consequently, hardening takes on new and different meanings based on virtualization and how it is managed. Why? The flexibility and efficiencies of the cloud mean new and different configuration options, which carry different risks compared with hardware-based infrastructure.
For example, a hardware-based operating system will have configuration files that define storage. Migration to a virtual machine means the configuration files that describe the hardware move outside the system and onto the hypervisor. The boundaries for a VM are defined by those configuration files. In other words, a Red Hat Enterprise Linux system would normally use a configuration file in the OS (e.g. /etc/fstab) to determine which hardware file systems to mount when it boots. That OS file has to be specific to the equipment the system was installed on (e.g. bus type, file system type, partition number). Virtualization, however, makes the same file in the OS generic, reflecting the typical, or at least reduced, set of options available from the hypervisor. The hardware details move to a file read by the hypervisor but invisible to the VM’s OS.
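As a hypothetical illustration (the device names and UUID below are invented for the example), the same /etc/fstab might look like this before and after virtualization:

```
# /etc/fstab on physical hardware: entries are tied to specific
# controllers and partitions discovered at install time
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /      ext4  defaults  1 1
/dev/sdb1                                  /data  ext4  defaults  1 2

# /etc/fstab on a VM: generic virtual devices; the real hardware
# mapping now lives in the hypervisor's configuration files,
# invisible to the guest OS
/dev/vda1                                  /      ext4  defaults  1 1
```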
In terms of compliance, this means there has to be a shift in how technical controls are assessed in a virtual environment. A hypervisor should put a VM in a sandbox, isolated from other VMs. The sandbox is defined in part by how the hypervisor controls access to its hardware. A VM therefore should have no expectation that it can gain direct hardware access by changing its configuration file; it should only see what it is provided. At the cloud provider level, this means a provider should always validate configuration information uploaded with a VM before allowing that VM to run. A simple failure to validate a VM setting, such as allowing a VM to directly mount hypervisor storage, could potentially compromise other VM data on that hypervisor. Optical drives have little or no need to be connected to a VM in a data center environment, so they usually can be disabled. Likewise, attacks on serial and parallel ports do not work if those ports are disabled.
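A minimal sketch of this validation step might look like the following. The configuration keys, device names and policy values here are invented for illustration and are not VMware's actual settings; the point is that an uploaded VM definition should be rejected if it requests devices or hardware access outside an approved allowlist:

```python
# Hypothetical sketch: validate an uploaded VM definition before it may run.
# Keys and policy values are invented for illustration.

FORBIDDEN_DEVICES = {"serial_port", "parallel_port", "optical_drive"}
ALLOWED_STORAGE_MODES = {"virtual_disk"}  # never raw hypervisor storage

def validate_vm_config(config):
    """Return a list of policy violations; an empty list means the VM may run."""
    violations = []
    for device in config.get("devices", []):
        if device in FORBIDDEN_DEVICES:
            violations.append("forbidden device requested: " + device)
    storage = config.get("storage_mode", "virtual_disk")
    if storage not in ALLOWED_STORAGE_MODES:
        # e.g. an attempt to directly mount hypervisor storage
        violations.append("disallowed storage mode: " + storage)
    return violations

# A VM that tries to attach a serial port and mount host storage directly
bad_vm = {"devices": ["serial_port"], "storage_mode": "raw_host_disk"}
print(validate_vm_config(bad_vm))
```

A clean definition (virtual disk only, no forbidden devices) passes with an empty violation list, while the example above is rejected on both counts.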
The key to this example is that a customer will need to know whether a provider validates VMs as well as disables features unused or unnecessary. It is the same concept as traditional compliance requirements—validate input and reduce the attack surface—but applied to the new processes and control points of cloud.
While the requirements in regulations do not yet spell out this level of technical detail for provisioning and de-provisioning systems, they do have language that is relevant and useful to assessors. The PCI Data Security Standard (DSS) version 2.0 states in Requirement 2.2 that a regulated entity must “develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.”
Cloud providers and vendors already are stepping forward to address the language of this regulatory requirement for standards. New security and compliance products, as well as detailed hardening guidelines, address the need for industry-accepted control requirements or recommendations. VMware’s vCenter Configuration Manager (VCM) is the type of tool that customers can request from their cloud providers to get a centralized and continual collection of configuration changes to infrastructure. A unified report will show systems that are out-of-sync with vendor hardening guides, or in violation of policy or regulations such as SOX, PCI DSS, HIPAA and FISMA. An emerging standard called the NIST Security Content Automation Protocol (SCAP), also supported by VCM, can even provide a detailed guide on current security configuration of operating systems and applications.
Establish trusted zones
Software-based networks also can be a sticking point for compliance. Segmentation between VMs, explained above in terms of the hypervisor, also is relevant to the configuration and maintenance of virtual switches. The migration of a VM from one hypervisor to another is often done in the clear for reasons of performance and availability. In other words, the VMs are sent by providers without encryption, so anyone with access to the network could potentially intercept and view or modify data. The memory contents of a VM could be viewed or altered. In this configuration, both confidentiality and integrity are at risk.
To reduce the risk of these attacks, the management-related traffic of the hypervisor should be confined to isolated, dedicated networks that are non-routable (i.e. no layer-3 route to other networks). The port group should be on a dedicated VLAN. The virtual switch can be shared, but no other VM should ever be connected to the port group VLAN. This also allows for monitoring for that VLAN ID on other port groups. Another option is to further separate the port group with a management-dedicated virtual switch and to monitor that switch for non-management traffic.
Taking this one step further, a management network should be set up at a cloud provider to restrict access only to known endpoints. Although requirements such as PCI DSS do not explicitly state this, the PCI Security Standards Council (SSC) in 2011 made it clear with the publication of its virtualization guidelines that reducing the management interface attack surface is a best practice. An attacker is likely to target the network to gain privileged access to a cloud provider’s management interface.
That is why the management layer should be protected by giving it a dedicated VLAN for the management port group on a shared virtual switch. Other VM traffic may be allowable on a switch if the port group for the management VLAN is restricted only to management traffic. An additional level of security, such as stateful packet inspection and intrusion detection monitoring, will help further segment the traffic and tends to be required under some regulations such as PCI DSS. An even better step to segment management communication is to move the management VLAN to a dedicated virtual switch that does not allow for any non-management port groups. The network segment also should not be routed except to other isolated and protected management networks.
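These isolation rules reduce to a simple, auditable check: nothing but management traffic on the management VLAN, ideally on a dedicated virtual switch. As a rough sketch (the port-group records and field names below are invented; a real audit would pull this data from the provider's management API):

```python
# Hypothetical sketch: audit port groups for management-network isolation.
# The records and field names are invented for illustration.

MGMT_VLAN = 100  # example management VLAN ID

port_groups = [
    {"name": "mgmt-pg", "vlan": 100, "role": "management", "vswitch": "vSwitch0"},
    {"name": "web-pg",  "vlan": 100, "role": "vm",         "vswitch": "vSwitch0"},  # violation
    {"name": "db-pg",   "vlan": 200, "role": "vm",         "vswitch": "vSwitch1"},
]

def audit_mgmt_isolation(groups, mgmt_vlan):
    """Flag any non-management port group that shares the management VLAN."""
    return [g["name"] for g in groups
            if g["vlan"] == mgmt_vlan and g["role"] != "management"]

print(audit_mgmt_isolation(port_groups, MGMT_VLAN))
```

Here web-pg would be flagged: a VM port group sitting on the management VLAN defeats the segmentation the VLAN was meant to provide.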
Another important step in overcoming cloud compliance challenges relates to the human element: the cloud provider’s administrators and users must be trained on policy and procedures. SSL certificates not only have to be carefully managed and secured, but administrators themselves also have to be vigilant about verifying SSL certificates before entering their passwords. Impersonation of a VMware vCenter Server or vCloud Director with an incorrect SSL certificate would force the client software to display a security warning. An administrator might override the warning if he or she isn’t properly trained to report it and investigate the error as a security incident.
Compliance as a cost-saver
One of the more interesting effects of cloud environments is that, when engineered properly, they actually can reduce compliance costs while improving security coverage. Anti-malware controls are an excellent example of how automation and consolidation reduce overhead. There is no doubt that antivirus is required under practically every regulation; from SOX to PCI DSS, there is a need to prevent unauthorized code. Requirement 5 of PCI DSS v2 states simply, “Use and regularly update antivirus software or programs.” Finding viruses with an ever-growing blacklist is a resource-intensive process. On dedicated hardware, that cost tends to disappear into the underutilized capacity common to such systems. A virtual environment, by comparison, makes far more efficient use of shared hardware; however, VMs can end up performing scans in competition with each other out of a limited pool of resources.
Hypervisor companies and their antivirus vendor partners are working to address this problem. For example, VMware’s vShield Endpoint offloads work from VMs to a shared and dedicated security VM on the same host. Centralized control and elimination of redundant load means a dedicated agent per VM is no longer necessary for virtual environments to achieve compliance requirements. The increased efficiency, while performing the same or better level of protection and compliance, might seem familiar to those wanting to move to cloud.
Consider how this newly centralized model of compliance in the cloud affects the storage footprint for each VM versus a traditional anti-malware agent. The traditional agent, plus several signature files kept for rollback capability, often is several GB in size. For the sake of argument, run a quick calculation for 1,000 VMs on 10 hosts with an anti-malware footprint of roughly 5 GB per VM and SAN storage for the VMs at $5K per TB:
(1,000 VM) x (5 GB per VM) = 5 TB
5 TB x ($5K per TB on SAN) = $25,000 in host-based antivirus storage space
Next, for comparison, run a calculation for a host running anti-malware on behalf of the VMs. The host-based anti-malware is likely to be larger than a VM anti-malware agent, so 7 GB instead of 5 GB gives the following result:
(10 Hosts) x (7 GB per host) = 70 GB
.07 TB x ($5K per TB on SAN) = $350
Under the host-based anti-malware model, storage costs drop by more than $24,000 (about $24 per VM) and nearly 5 TB of SAN capacity is freed. Network resource benefits also are possible. The hypervisor-based solution downloads malware signatures once for all the guests on a host, so 10 systems communicate updates and events instead of 1,000. Factoring in keep-alive packets, scan start/stop status messages and signature downloads, roughly 2 MB of overhead for those 1,000 systems could be eliminated from the network. A carefully planned and controlled cloud provider environment may therefore find significant financial benefits when properly addressing the challenges of cloud compliance.
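The arithmetic above can be checked directly, using the article's own illustrative sizes and prices:

```python
# Storage-cost comparison using the article's assumptions
VMS, HOSTS = 1000, 10
AGENT_GB_PER_VM = 5      # per-VM anti-malware footprint
HOST_AGENT_GB = 7        # per-host offloaded anti-malware footprint
SAN_COST_PER_TB = 5000   # $5K per TB of SAN storage

per_vm_tb = VMS * AGENT_GB_PER_VM / 1000    # 5.0 TB
per_host_tb = HOSTS * HOST_AGENT_GB / 1000  # 0.07 TB

per_vm_cost = per_vm_tb * SAN_COST_PER_TB      # $25,000
per_host_cost = per_host_tb * SAN_COST_PER_TB  # $350

print(f"savings: ${per_vm_cost - per_host_cost:,.0f}, "
      f"storage freed: {per_vm_tb - per_host_tb:.2f} TB")
# → savings: $24,650, storage freed: 4.93 TB
```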
Today, organizations are eager to take advantage of the cost efficiencies of cloud computing, but they need to ensure the move won’t jeopardize their compliance efforts. Emerging standards and improved solutions from vendors are helping to guide customers and their providers toward compliance with many governmental and industry regulations. In some cases, it is proving to be easier to be compliant in the cloud than ever before.
Davi Ottenheimer is president of security consultancy flyingpenguin and author of the new book Securing the Virtual Environment: How to Defend the Enterprise Against Attack. He is a QSA and PA-QSA for K3DES with more than 17 years of experience in security operations and assessments, including a decade of leading incident response and digital forensics. Davi formerly was global communication security manager at Barclays Global Investors and a “Dedicated Paranoid” at Yahoo responsible for digital home, broadband and mobile security. Send comments on this article to email@example.com.
This was first published in November 2012