For years, retailers, merchants and payment service providers have asked the question: Can virtualization be used in a PCI-compliant cardholder data environment (CDE)?
Several qualified security assessors (QSAs) and auditors argued that the PCI DSS “one function per server” requirement (Requirement 2.2.1) rules out virtualization as an acceptable technology in a CDE. Other QSAs, auditors and architects posited the “one function per server” requirement could instead be met by installing one function per virtual machine (VM) server running on top of a hypervisor. But there was no official ruling from the PCI Security Standards Council, leaving the question up in the air.
When the PCI DSS v2.0 document went live on Jan. 1, 2011, one major part of that debate was settled: Yes, virtualization in a PCI-compliant CDE was acceptable. But that’s about all that was settled. The deeper PCI DSS virtualization questions of how those virtual servers and components should be installed, configured and managed were not addressed by the DSS in iteration 2.0.
In an effort to provide answers to those questions, the Council created a committee, called the PCI Virtualization Special Interest Group (SIG), whose membership includes representatives from a wide variety of organizations, including financial services firms, cloud service providers, virtualization vendors and retailers. One of the top remits of this SIG was to author and publish a “white paper that defines and introduces common PCI use cases for virtualization” and a “mapping tool that provides detailed guidance on virtualization uses to meet PCI DSS requirements, including specific recommended, required and auditable controls.”
On June 14, 2011, this guidance was released as the Information Supplement: PCI DSS Virtualization Guidelines (.pdf). In this tip, we’ll analyze the PCI Virtualization SIG’s recommendations, and try to determine if it is feasible to implement virtualization in the cardholder data environment and remain PCI compliant.
PCI virtualization guidelines at a glance
If you are familiar with virtualization technology, go straight to pages 15 and 29 in the guidance
for specifics on what to do in the CDE. The SIG
guidance is divided into two main parts: a body and an appendix. In the body, a couple
of pages are devoted to level-setting what virtualization means. This is followed by approximately
five pages explaining the unique virtualization
security concerns and risks, and 10 pages of general recommendations for securing cardholder
data in mixed-mode (with virtualization) and cloud environments, and guidelines on how to assess
risk in these environments.
For virtualization-savvy implementers and assessors, the real meat of the guidelines is in the appendix. The 10-page appendix maps “virtualization considerations” and details “Additional Best Practices / Recommendations” to the PCI DSS requirements where virtualization has an impact. For example, Requirement 1 of the DSS pertains to firewalls and connections from the outside/public networks to servers and systems in the CDE. The Virtualization Guidelines add the Best Practice/Recommendation: “Do not locate untrusted systems or networks on the same host or hypervisor as systems in the CDE” to Requirement 1. If you are using virtualization in your CDE, read through this appendix carefully and check the processes and controls laid out in the guidelines against your own.
Although the guidelines call out specific best practices and recommendations in the appendix, the document is careful to state that they “do not replace, supersede, or extend PCI DSS requirements. All best practices and recommendations contained herein are provided as guidance only.” In other words, don’t expect this new guidance to end all of the debates between assessors and implementers. Though the guidelines provide much-needed expansion on how to implement virtualization securely in a CDE, the PCI DSS, which makes little mention of virtualization, remains the final word for PCI DSS compliance.
Hypervisor heartburn?
One concept, consideration and best practice repeated throughout the PCI Virtualization SIG’s
guidelines is that the VMs running on top of a single hypervisor are in a similar trust zone, and
all of them can be considered as inside the CDE. In other words, “if any component running on a
particular hypervisor or host is in scope for PCI DSS, it is recommended that all components on
that hypervisor or host be considered in scope as well.” (Emphasis mine.) Along that same line of
architectural thinking, the guidelines also recommend against having a less secure server VM
running on the same hypervisor as a more secure one because a “virtual component requiring higher
security could unintentionally be exposed to additional risk if hosted on the same system or
hypervisor as a virtual component of lower security.”
This point may sound fairly simple on the surface, but could have major implications in practice. One of the benefits of virtualized data centers is the flexibility to bring up multiple virtual servers on a single physical host and to move VMs from one hardware component to another when more (or less) processing power is needed. But, given the guidance that all components on a single hypervisor are in PCI scope, moving a non-PCI component onto that hypervisor could throw a CDE out of compliance or otherwise put cardholder data at risk.
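One way to operationalize this scoping rule is a simple inventory check. The sketch below, with illustrative (hypothetical) host and VM names, flags any hypervisor that hosts both in-scope CDE VMs and out-of-scope VMs, which under the SIG's guidance either expands the audit scope or violates the recommendation against mixing trust levels:

```python
# Hypothetical inventory check based on the SIG guidance that all
# components on an in-scope hypervisor should be considered in scope.
# Host names, VM names and the data structures are illustrative only.

def mixed_scope_hosts(placements, cde_vms):
    """Return hypervisors that host both CDE and non-CDE VMs.

    placements -- dict mapping hypervisor name -> list of VM names
    cde_vms    -- set of VM names known to be in PCI DSS scope
    """
    flagged = {}
    for host, vms in placements.items():
        in_scope = [vm for vm in vms if vm in cde_vms]
        out_of_scope = [vm for vm in vms if vm not in cde_vms]
        # A host with both kinds of VM mixes trust levels: either the
        # out-of-scope VMs are pulled into PCI scope, or the CDE VMs
        # are exposed to lower-security neighbors.
        if in_scope and out_of_scope:
            flagged[host] = {"cde": in_scope, "non_cde": out_of_scope}
    return flagged

placements = {
    "esx-01": ["pay-db", "pay-app"],      # all CDE: consistent
    "esx-02": ["hr-wiki", "test-web"],    # no CDE: out of scope
    "esx-03": ["pay-gw", "dev-sandbox"],  # mixed: scope-creep risk
}
cde_vms = {"pay-db", "pay-app", "pay-gw"}

print(mixed_scope_hosts(placements, cde_vms))
# flags esx-03, which hosts both a CDE VM and an out-of-scope VM
```

A check like this could run against a live VM inventory (e.g., pulled from the virtualization management API) before any migration, so that a vMotion-style move never silently drops an untrusted VM onto an in-scope host.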
The guidelines also point out some of the issues with inter-VM monitoring. Traditional network monitoring devices watch traffic that leaves the hypervisor for the wired or wireless network. But what about intra-VM traffic that passes from VM to VM on the same hypervisor? As the guidelines point out, virtual “firewalls and routers could be embedded within the hypervisor” to address the intra-VM “blind spot” issue, but this may mean purchasing new software.
A final point of note is the use of virtual desktop infrastructures (VDIs) and applications in the payment ecosystem. According to the guidelines, these are “in scope if they are involved in the processing, storage or transmission of cardholder data, or provide access to the CDE.” For companies making extensive use of VDIs, a reassessment of the architecture is in order to ensure the PCI audit scope is correctly defined. To reduce scope, additional segmentation or even limiting access to certain devices may be warranted.
It’s still your data
Though cloud and virtualization technology aren’t inextricably linked (you can have one without
the other), the architectural reality is that the two technologies are commonly deployed together.
The guidelines recognize this and provide some guidance on cardholder data protection in the cloud,
too. The guidelines refer to the three main cloud models: Infrastructure, Platform, and Software as
a Service. In all cases, they point out that the data protection component, the layer where the
cardholder data resides, is the responsibility of the cloud customer. And don’t assume that because
you’re using a PCI-approved service provider, responsibility for the data is automatically out of
your scope. Whether you’re storing cardholder data
on-premises or in the public cloud, responsibility for protection of that data is yours unless you
have expressly and explicitly transferred the responsibility in a legally binding way.
Conclusion
Yes, in short, your organization can use virtualization technology and maintain a PCI-compliant
CDE, but getting that technology configured and deployed securely is non-trivial, and the
guidelines cover a lot of important ground. If your organization hasn’t done extensive research
into risk modeling virtualized environments, the guidelines are a great overview of how to
implement the correct controls to keep cardholder data secure. If you are a virtualization risk
veteran, you can probably skip over the introductory and background sections – but don’t ignore the
recommendations and best practices and Appendix A, especially the points about hypervisor
separation and data protection accountability in the cloud.
About the author:
Diana Kelley is a partner with Amherst, N.H.-based consulting firm SecurityCurve. She
formerly served as vice president and service director with research firm Burton Group. She has
extensive experience creating secure network architectures and business solutions for large
corporations and delivering strategic, competitive knowledge to security software vendors.
This was first published in June 2011