How to evolve your compliance program as technologies and mandates change

Date: Sep 14, 2010

This video describes how organizations can effectively interpret particular requirements from regulations such as HIPAA and PCI DSS, as well as implications these interpretations have on compliance activities, administration and auditors.

Topics include:

  • Change is constant (2:12)
  • Change in regulations (4:45)
  • Evolution of technology (7:01)
  • Virtualization and compliance (11:40)
  • Cloud computing and compliance (26:16)
  • Testing (28:26)
  • Encryption requirements (31:50)

Read the full transcript from this video below:  

Please note the full transcript is for reference only and may include errors. To report an error, contact editor@searchsecurity.com.   

How to evolve your compliance program as technologies and mandates change

Richard E. Mackey, Jr.: What I'm going to be talking about today is how to evolve your compliance program as technologies and as mandates change. What that means is that there are a number of different changes that occur inside organizations. The regulations can change. Technology can change, etc. This presentation is all about how your compliance program needs to adapt to those types of changes.

The first point is that change is, in fact, constant, so if your compliance program is a one-and-done, if you think you're done as soon as you comply with the first regulation or contract instance, you're going to be sadly mistaken, and there's a lot of investment that needs to go into a program that is adaptable.

The next is that you have to recognize that there are always going to be changes in regulations, and those changes come about because there will be new standards published, new interpretations, and so on. You need to be able to adapt to those as well.

Technology actually forces organizations to change because, when you apply a regulation or specification to a new technical environment, its implications for that technology may have to be interpreted differently. Virtualization is the item that I bring up over and over again throughout this presentation.

Cloud computing also poses real challenges for compliance. And then, finally, I'll talk about two other areas that have to be considered when you're thinking about compliance. That is encryption and testing and how those requirements are either evolving or springing up, depending on the regulations that you're having to comply with.

As I said, compliance with any regulation or contract requires adaptation. I make this distinction between regulation and contract because it's an important one. PCI is a contract. It's a contract even though, if you read the PCI Data Security Standard, it actually says that any organization that handles credit cards at all (processes, stores or transmits credit cards) has to comply with this standard.

The fact is, I could write that you have to give me money on a piece of paper as well, and that would force you to give it to me once you signed it and agreed to those terms. The fact is that PCI is a contract between parties, say a brand or an acquiring bank and a merchant.  When that contract exists, then the PCI DSS comes into effect. It doesn't apply to organizations that don't have such a contract in place unless it means something in the business world to them.

Regulations, on the other hand, give you no option to comply or not. In a way, a regulation is like a contract with the government or a regulatory agency, and you have to comply. You need to understand the difference between contracts and regulations.

Regulations and contracts change from year to year, so the PCI DSS, for example, I'll talk about some of the changes that happen there. Organizations change and that may affect your compliance, so if you merged with another company or acquired another company or were acquired, or you merged two organizations or two businesses within your organization, that could affect your compliance.

Business risks change from year to year. If you become the target of an attack, for example, if you become more high profile, maybe your risks change. You may change some aspect of your business. For example, you might provide a service that you didn't before. That service may, in fact, increase your risk for both operational risk and compliance risk.

Technologies change, and a good deal of this presentation is going to deal with that. Interpretations of rules change, and we'll talk more about how auditors are changing their opinions about how the rules apply to various technologies. Then auditors' processes change. In fact, if you look at PCI and what's happening in that space right now, the PCI council is taking a lot of time to do quality assurance on the auditing process, which is forcing more scrutiny on the auditors themselves and the processes they use. So while you might have gotten a free ride last year, you won't get a free ride this year. Well, at least that's the thought.

As an example of regulatory or contractual changes: PCI changed in a number of ways between 1.1 and 1.2. October 2008 was the 1.2 release date, and they changed the requirements for encryption on wireless in particular. WEP was acceptable before, and now it has to be phased out by a certain date. There's also a requirement for quarterly scans of all your Internet addresses, even if they're not inside what's called the cardholder data environment. It used to be that you might interpret the rule to say that you only had to scan the external addresses that led to your cardholder data environment, the systems that processed that data. Now they say you have to scan all external addresses and look for any issue that would not pass the requirements of the Data Security Standard.
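The quarterly scan requirement means every externally reachable service gets looked at, not just the ones fronting the cardholder data environment. As a toy illustration of the kind of triage a scan-report review automates, here is a sketch that flags services commonly failed for using cleartext protocols (the port-to-service mapping and the scan results are made up for illustration; this is not a substitute for an ASV scan):

```python
# Toy reviewer for external scan results: flag services commonly
# failed by PCI external scans. The mapping below is illustrative,
# not the official ASV criteria.
INSECURE_SERVICES = {
    21: "ftp (cleartext credentials)",
    23: "telnet (cleartext session)",
    110: "pop3 (cleartext credentials)",
}

def review_scan(results):
    """results: dict mapping external IP -> list of open ports.
    Returns a list of (ip, port, reason) findings to remediate."""
    findings = []
    for ip, ports in sorted(results.items()):
        for port in ports:
            if port in INSECURE_SERVICES:
                findings.append((ip, port, INSECURE_SERVICES[port]))
    return findings

# Hypothetical scan output for two external addresses
scan = {"203.0.113.10": [22, 443], "203.0.113.11": [23, 443]}
for ip, port, reason in review_scan(scan):
    print(f"{ip}:{port} -> {reason}")
```

The point of automating even this crude check is repeatability: the same criteria get applied every quarter, across every address, not just the ones someone remembered.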

The Massachusetts identity theft law says that you have to encrypt all portable devices. The law has a definition that states exactly what personal identifying data means; it could be a Social Security number and a name, a credit card or bank account number and a name, and so on. If you have that data on a portable device, it has to be encrypted. You also have to have strong governance processes in place and a whole written information security program. You have to have risk assessment processes in place, and policy requirements and policies in place as well.

The interesting thing is that these are all similar requirements to PCI and HIPAA, but many organizations that handle Massachusetts resident data have never had to have such a formal process in place. That could be a real challenge for them. Then HIPAA: we haven't seen many changes in the regulation itself, but we've seen an increase in audits of healthcare organizations, or covered entities, as they call them in HIPAA parlance. That increases the scrutiny and the intensity with which people are going after compliance. That has affected not only covered entities like hospitals and insurance companies, but also service providers, who are considered business associates in that world.

Also, we talked about technology changes. With virtualization, you're combining systems. If you consolidate a whole set of once disparate, separate systems onto a single piece of hardware, you have to maintain logical separation between them to meet some of the regulatory or contractual requirements. The question is: can that be done? It's forcing auditors, like QSAs (qualified security assessors) in the PCI environment in particular, to look at the lines that are drawn between virtual systems and try to determine whether those are adequate to define a separation in functionality, for example.

I’ll talk a lot about that. Then cloud computing is also an interesting issue because you've got this flexible computing environment in which you can instantiate an entire environment and then de-instantiate the environment. The question is: can that environment ever be capable of meeting the requirements of, say, the PCI DSS?

Also, you've got sharing at other levels. If you look at SANs (storage area networks), you see virtual disks out in the network that are actually spanned locations. That can be a real challenge because the question is if these storage area networks, these devices, are shared between multiple systems that have to meet standards and have to be physically, or are supposed to be at least logically separate, does that linkage in the storage actually create one single system? That's an issue that you should look into.

Another issue that has blurred the lines between organization and logical separation is federated security. If you're coming in to a service provider, for example, but you're being authenticated by your home organization, where's the trust relationship between those two organizations, and what impact does that have on the trustworthiness or the compliance of the entire environment when you start separating who's authenticating from who's providing the service? These are all issues. All four of these technological changes have to be interpreted under the requirements associated with each of the regulations or contracts that you have to comply with.

Let's just look specifically at some language. If you look at the language that comes right out of the PCI DSS, the Data Security Standard for the payment card industry, you'll see that the scope of the CDE, the boundaries that I talked about of the cardholder data environment, is defined as follows: it's an area of computer system network that processes cardholder data or sensitive authentication data, and those systems and segments that directly attach to or support cardholder processing, storage and transmission.

What we've got here is a document that's telling us that the environment is defined by a computer system network. So right there, we say, "Okay, well, what is a network?" A network is a set of wires and routers and firewalls and so on, and a set of systems that are connected to them. Then what it says is that adequate network segmentation, which isolates systems that store, process and transmit cardholder data from those that do not, may reduce the scope of the cardholder data environment and thus the scope of the PCI DSS assessment.

What we're saying here is, if you can put some network segmentation in place, life is good. But the interesting thing is, in a virtual environment, there are networks in the virtual environment. There are physical networks. There are physical systems. There are virtual systems. The question is going to be, "How do you interpret this language given the fact, is it a one-to-one correspondence between a system in the virtual world and a system in the cardholder data world?" You can force some auditors to come to some pretty tough decisions, and you don't want auditors to make decisions, let me tell you.

Then, to go along with this, the system components that you're going to find here are any network component, server or application included in or connected to the cardholder data environment. If you're working with real, physical systems, this isn't that hard to interpret, but as soon as you introduce the idea that multiple systems can be on the same box and the network can be defined within the virtual environment, it becomes a little bit more difficult to interpret.

How does virtualization affect this? It affects what are considered the component boundaries, as we were saying: system boundaries, network boundaries, administrative boundaries. When you have an administrator who administers the lower-level system, the actual hypervisor for example, are they automatically considered an administrator of the guest systems that reside on top of it?

Also, it can affect the monitoring effectiveness. You might have a device installed that's supposed to monitor networks. What does that do to the virtual networks? Let's just look at, just quickly, at what virtualization means. You've got hardware at the bottom. You've got this one physical box and then you've got a virtual machine manager, a hypervisor layer on top of that and then you've got several guest operating systems sitting there.

Now, these systems are managed and run according to each of the applications that are in there. These are all separate systems. The question is: can we treat them that way in the world of compliance? Here's one problem that occurs. Say I've got some sort of network appliance sitting out on the network that's watching for problems and monitoring for suspicious traffic. You've got these various systems in the virtual environment communicating with each other, where the traffic never goes out on the physical network and, therefore, is never seen by that appliance.

The issue's going to be, how do you implement that kind of monitoring inside? Now there are devices and products that you can install to do this, but the problem is, if you're making a move from a physically defined environment to a virtual environment, you have to take these kinds of challenges into account.

Are the virtual system boundaries equivalent to the hardware system boundaries? Now, it's going to depend upon configurations. It's going to depend on the purpose of the system that you deploy. It's also going to depend on the administrative model. Who is responsible for the various components? Is one group responsible for administering the actual configurations of the individual systems and another for the VMM or hypervisor?

It depends on what entities have access to the application data, the system configuration, network segments and hypervisor. I'll talk more about some of the practices you can put in place later, but every one of these layers of abstraction needs to be taken into account when you're building an environment where you need to comply with specific requirements out of, say, PCI DSS.

Then, is the cardholder data encrypted or protected appropriately? Here's the deal. It's supposed to be encrypted when it goes out over the network. But what's the network? If traffic crosses that little virtual network inside the system and never goes out over the wire, has it really gone out over the network? These are questions you have to answer.

One other problem you have to answer when you're talking about virtual environments is that you can establish a virtual system relatively quickly, put it away, and then bring it back at some later time. What happens to the information that resided in memory, on disk, in virtual memory, and so on? Can I start up a program and then start winding my way through memory to look for information that was left over from the last instance?
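That leftover-data concern can be reduced with deliberate disposal procedures. As a minimal sketch, assuming the retired image is an ordinary file on disk, here is one way to overwrite contents before deletion (copy-on-write filesystems, snapshots and SSD wear-leveling can still retain old blocks, so treat this as illustrative rather than a guarantee):

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents with zeros before unlinking it,
    so a casual re-read of the file won't recover the old data.
    Note: filesystems with copy-on-write or snapshots, and SSDs,
    may keep old blocks anyway; this is a sketch, not a guarantee."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to stable storage
    os.remove(path)
```

The procedural point matters more than the tool: if your process defines what happens to a paused or retired image, you have an answer ready when an auditor, or an incident, asks the question.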

These are not, in many cases, dedicated systems that are running these virtual environments. So you have to look at what implications they have, and I've got to tell you that the audit processes associated with a lot of these standards, like the data security standard, are not real well equipped to even probe into these questions. I don't know if that's good or bad for people who are being audited, but the issue is if you instantiate a system and then take it away, what's left over? There isn't anything in the PCI DSS that's actually going to force an auditor to look at that, but it's something that you should be thinking about.

Are network requirements, particularly firewalls, implemented in the virtual network? Imagine you actually wanted to set up an entire PCI CDE inside a virtual system where the CDE was only part of that virtual environment. I'm not recommending this. I'm just saying, hypothetically speaking, if you wanted to do something like this, then you would have to implement firewalls inside that environment and cordon off that network from the rest if you wanted to establish the segmentation that's talked about in the description of the cardholder data environment. Now we have to start thinking about how you're going to implement network mechanisms inside the virtual system.

Now, another issue: in the physical environment, you do builds for systems and you have standard configurations for them. If you look at a lot of these, particularly PCI DSS but other regulations as well, what you want is a controlled environment where you know exactly how every system in the environment is configured. PCI DSS actually has an auditor look for configuration standards for every single one of the systems, meaning you know exactly what version of every operating system is running and exactly how all the configuration options are set on every system.

The question is going to be, are those rules followed as closely or strictly in the virtual environment as they are in the physical environment? Then it goes on, are they hardened appropriately? Are they configured according to standard? And are they monitored to make sure that they won't fall out of those configurations?

Now you think about it, if you look at the typical configuration monitoring programs, then you'd have to install something like Tripwire on each one of the virtual systems or you'd have to scan those virtual systems to ensure that they are, in fact, in the configuration that you set for them. The point is that there are no short cuts here. You still have to do all the things in the virtual environment that you had to do in the physical environment. You can't just disregard any of the rules because everything's running on the same machine.
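A Tripwire-style check boils down to recording a cryptographic baseline of each monitored file and re-hashing on a schedule. Here is a minimal, stdlib-only sketch of that idea (the function names are mine; a real deployment would also track permissions, ownership and scheduling, and protect the baseline itself):

```python
import hashlib
import os

def snapshot(paths):
    """Record a SHA-256 baseline for a set of configuration files."""
    baseline = {}
    for p in paths:
        with open(p, "rb") as f:
            baseline[p] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_drift(baseline):
    """Re-hash every file in the baseline and report changes."""
    drifted = []
    for p, digest in baseline.items():
        if not os.path.exists(p):
            drifted.append((p, "missing"))
            continue
        with open(p, "rb") as f:
            current = hashlib.sha256(f.read()).hexdigest()
        if current != digest:
            drifted.append((p, "modified"))
    return drifted
```

Run `snapshot` when a virtual system is built to standard, store the result somewhere the guest's administrators can't alter, and run `detect_drift` periodically; the same discipline applies to every guest image, not just the physical hosts.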

That first issue is basically what I just said: configurations need to be monitored. Another issue you have to pay close attention to is how administrative access is granted and controlled for each of the systems in your virtual environment. What virtual environments do is provide additional layers of abstraction that you have to worry about. In the past, you only had to worry about the operating system and the applications that resided in those operating systems, and you also had to worry about the network. In certain cases, standards go further.

For example, ISO 27002 recommends that you have separate control over network administration and system administration. Do you have appropriate controls and monitoring over the administration of the hypervisor or VMM, the operating systems, and the network components in your environment? If you don't, then you're going to fall short of being compliant with the standard.

Then, what kind of testing do you have in place? If you have to run penetration testing or application testing across the various systems in your cardholder data environment, whatever environment, whatever compliance environment you've established, have you met those requirements on every one of the virtual systems? This might become difficult if, in fact, you create systems and then put them away, as well. This poses a real problem.

Another part of every compliance program has to be vulnerability management. Virtualization also creates some challenges here, because now you've got vulnerabilities in these systems that didn't exist before. Every device that you instantiate inside the virtual environment needs to be taken care of according to the standard. You've got to look for bugs and apply patches in the hypervisor, in all the operating systems you're running in your virtual environment, in any of the network devices or virtual networking components that exist inside your environment, and in the applications as well.

The real new component here is the hypervisor. The additional challenge is that these all exist inside this virtual environment, and you need to track and deploy patches to these various systems in a way that meets the standard and is actually effective in a virtual environment.

The other thing is that you need logging and audit trails at all those levels of abstraction. If you make a change to the configuration of the hypervisor, that has an effect on the compliance and the security of all the system software that runs on top of it, and you need to be able to track that. You need to be able to protect those audit trails as well.
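One common way to make an audit trail tamper-evident is to chain each record to the hash of the one before it, so a silent edit anywhere breaks verification from that point on. A minimal sketch of that idea using only the standard library (the record layout here is invented for illustration; real systems also ship logs off-host and restrict who can write them):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def _digest(event, prev):
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, event):
    """Append an event, chaining it to the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"event": event, "prev": prev, "hash": _digest(event, prev)})
    return log

def verify(log):
    """Recompute the chain; any edited, removed or reordered record
    makes verification fail."""
    prev = GENESIS
    for rec in log:
        if rec["prev"] != prev or rec["hash"] != _digest(rec["event"], prev):
            return False
        prev = rec["hash"]
    return True
```

Applied at the hypervisor layer, a chain like this lets you demonstrate to an auditor not just that configuration changes were logged, but that the log itself hasn't been quietly rewritten.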

Another issue is identity and access management. Virtualization provides a real challenge in defining access control. For example, if you're supposed to maintain a different administrative group for maintaining this set of systems and then this set of systems, but you have a single hypervisor that's being managed by a single organization, or you're supposed to separate network and system administration from one another, these virtual systems pose a real problem. This puts pressure on any kind of identity and access management program that you have in place.

If you have all this in a centralized system like Sun Access Manager or Oracle or any one of these, you have to take into account the fact that you are deploying to virtual systems as well as hard systems, and you have to deal with all the different OS types that you're running inside your environment as well. It's really not that much of a difference from dealing with this in a regular environment, but you have to make sure that you're not crossed up by the fact that you've got these different levels of abstraction and you've got identities in each one of those levels.

Some rules of thumb to live by in establishing virtualization or using virtual technology: what you want to do, if at all possible, is not try to split your world down inside the virtual systems. Establish your hardware boundaries to be consistent with the functions that are going to be provided inside your cardholder data environment or whatever compliance requirement. Don't try to draw a line to establish your compliance boundaries inside a virtual system.

In fact, try, as much as possible, to keep the functional division as well. For example, if it says that a security service needs to be located on a single system, try to run all your security services in a single virtual environment, and it might run across multiple systems. Try to run naming and directory services and supporting services in another environment. Use hardware networking where possible to define whatever the boundaries are; don't rely on the virtual networking inside a system.

Tightly control access to all the accounts on virtual systems and try to maintain even more discipline in those environments than you would on a single physical system. Ensure the data is obscured when images are removed; one way of dealing with that is to use the virtual environment for the same purpose each time, not to mix and match. It may be attractive to create a computing environment where you can just instantiate, then remove, instantiate, then remove any different function. The problem with that is that when you have specific requirements for data protection, you might be leaving traces behind that would be difficult to explain to an auditor.

Follow the same configuration guidelines that you would in the physical environment. Make sure that you have configuration standards for all of your virtual systems. Separate responsibilities for system administration and virtual system administration if you can. Ensure that development and test don't use production images of data. This is another problem, in fact a temptation, that occurs in virtual environments: once I have an entire virtual system established, I might be tempted to use that entire virtual system to run through quality assurance. I've got customer data, credit card data, healthcare data, banking data, whatever it is, and I move it off into test because the entire instance can be moved over and tested and tried. Now what you've done is broken the rule that you don't use production data in test.

Develop detailed procedures for instantiation of these systems and access controls around the images so that you know that only privileged administrators can get at the images of these systems because they actually can be stopped in midstream and taken off, paused as if you had closed a laptop, and then moved on. It’s got all this data and everything inside it.

Then monitor configurations of virtual systems as you would hardware. Establish whatever tools you need to know exactly how any of these systems has been changed and when it changed. Don't back off just because they're virtual systems running inside another system.

I talked briefly about cloud computing, and clearly the challenge with cloud computing is that it is the ultimate in creating an environment and then de-instantiating it. Creating it, dropping it, creating it, dropping it. It's very difficult to define exactly what the boundaries are or what the requirements are associated with that system. The problem is that the cloud systems are recycled and given to anyone who needs them. The memory contents aren't guaranteed. The boundaries of the system aren't guaranteed. The configurations of the system may be looser.

If you look at the contracts being provided by cloud computing vendors like Amazon, they're not giving you any real statement about exactly how these systems are going to behave and what happens when your system comes off of theirs and then somebody else's goes in. What that means is that it's virtually impossible to comply with a regulation like PCI, and just from an operational security standpoint, it makes it difficult to recommend instantiating any kind of sensitive data environment inside one of those environments.

Now, if you provide your own cloud computing environment, what you want to do is make sure it was always used for the same basic purpose. It might grow or shrink but it's not going to change in its underlying use and the requirements that need to be met. The interesting thing is that where there's an opportunity, someone's going to step in and try to fill it, so watch this space. It could be that someone figures out a way of putting those guarantees around the service of cloud computing and having all the requisite guarantees that an organization wants to put a compliant environment out there. You hear a lot of talk about it but I have yet to see anyone who can actually argue that a cloud could actually be PCI-compliant.

Two topics to finish up, two areas that come up in all compliance areas. One is testing and the other is encryption. I'll just handle those. PCI requires several types of testing, and in fact this has expanded since the last release. Quarterly, an approved scanning vendor has to scan your environment, and that means all external IP addresses, and then you have to pass. That means you can't be exposing any insecure protocols and there can't be any obvious bugs, according to the scanning procedures.

You also have to do internal scans of your environment. Those don't have to be done by an approved scanning vendor, a specific organization that's been approved by the PCI council, but they have to be done nonetheless, multiple times per year. Then there has to be application vulnerability analysis and code reviews done. The question's going to be, whether you're trying to comply with PCI or with any other regulation, what type of testing is actually necessary. You need to look at the standard, or at the regulation or contract, and know how to meet those requirements. PCI is fairly straightforward, very prescriptive.

On the other hand, if you look at HIPAA, it's nowhere near as prescriptive. It's harder to determine what is necessary. You can look at best practice and say, "Well, you know, I'm going to do testing once a year and that will suffice," but what all of these regulations have in common is that you are supposed to put a risk assessment in place and determine what would be a reasonable set of tests for, in fact, we'll talk about encryption, reasonable encryption to put in place to mitigate the risk of exposure.

The first thing to do, for HIPAA or for FFIEC or for any of the requirements, is to understand what the specific prescribed requirements are, and then what would be necessary in addition to fulfill your responsibilities in assuring that your systems and your data are well protected.

HIPAA refers to testing of contingency plans. The interesting thing about HIPAA is a lot of people think of HIPAA as being a confidentiality rule, the security rule being you are trying to protect everyone's privacy. The fact is that HIPAA actually requires organizations to protect the confidentiality, integrity and availability of data. And the reason for that is, if you're in an emergency room and your records need to be found, they better be available, and they better be right. If you're going to be handing information off to an insurance company, you want those claims to be correct. HIPAA says that all those qualities need to be maintained.

What HIPAA says is you need to test your contingency plans for access to the data. You should do penetration testing if deemed appropriate based on the risk assessment, and the NIST guidelines that federal organizations have to follow, which in fact describe how to comply with HIPAA, suggest tests of the authentication methods and the audit processes in place.

We're done with testing and on to encryption. PCI establishes, again, very prescriptive encryption requirements. All cardholder data, the primary account number and the expiration date, has to be encrypted. All passwords and authentication data need to be encrypted. Any place where that data exists, and this isn't just on portable devices, this is in any database, all this data needs to be encrypted in place.
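Worth noting: PCI DSS also accepts truncation and strong one-way hashing as ways of rendering the PAN unreadable, alongside encryption. A stdlib-only sketch of that style of protection (a real system would use a vetted encryption or tokenization product with proper key management; the key below is a placeholder):

```python
import hashlib
import hmac

def mask_pan(pan):
    """Display form of a PAN: keep only the last four digits,
    replacing the rest with '*'. Truncation is one PCI-accepted
    way of rendering the PAN unreadable for display."""
    return "*" * (len(pan) - 4) + pan[-4:]

def pan_token(pan, key):
    """Keyed one-way hash usable as a lookup token. Keying the
    hash matters: the 16-digit PAN space is small enough that an
    unkeyed hash invites brute-force reversal."""
    return hmac.new(key, pan.encode(), hashlib.sha256).hexdigest()

print(mask_pan("4111111111111111"))  # ************1111
```

The design trade-off: masking and keyed hashing are one-way, so they suit display and matching; if the business genuinely needs the full PAN back, you're into reversible encryption and all of its key-management requirements.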

Massachusetts law is a little bit less stringent about the encryption requirements. You can leave your data unencrypted inside your databases, inside your files, but if it's transmitted on a public network it has to be encrypted. It has to go over SSL or some suitably strong encryption channel. Then, it says you have to encrypt the data while it's on laptops or portable devices. Now here’s some room for interpretation.

The question is: what's a portable device? Clearly a phone is a portable device, and if I ran my business off of my phone and it held Massachusetts residents' data, that data would need to be encrypted. All Massachusetts resident data needs to be encrypted when it's on portable devices. The problem is that the law doesn't say anything about archived data, which is a source of a lot of problems.

If I put something on a tape, that’s a fairly portable device. The fact is I don’t think that’s what is implied in the law. There is no encryption requirement for back-up. If you’re taking a portable device, a thumb-drive, a phone, a PDA or your laptop, then you’re required to encrypt it. I’m going to watch this space. It’s going to be interesting to see how the interpretations go.

Then, HIPAA provides guidance, but encryption in HIPAA is an addressable, not a required, practice. If you look at it, it says that the organization has to do a risk assessment and determine whether there is significant risk to the data that would require encryption. So there's some wiggle room as to whether you should, need to, or do not need to actually implement encryption controls.

Just in summary, changes occur all the time and your compliance program really needs to be constructed to be able to recognize when those changes occur. Whether they’re new regulations or technology changes or business changes or interpretation of existing regulations in contracts, you need to be able to adapt.

The first thing is to make sure you have some sort of periodic review in place to understand what the requirements are. If you establish appropriate contracts with your outside providers, review those contracts and the regulations regularly, and do risk assessments regularly, you'll be able to determine where your exposures are and where your current practices may not meet your regulatory or contractual requirements.

The next step is, when you're trying to meet your regulatory requirements, you have to understand exactly what your testing requirements are, implement those into your program, and continue to watch those regulations to see whether any of the prescribed tests have changed, whether it's the scope or the depth of the testing that needs to be done.

You have to make sure, especially with new laws like the Massachusetts law that come along, that you're ready to deploy encryption technology and additional mechanisms and policies where the law prescribes. You will have to go through your entire compliance program and deal with any deficiencies that you find. But the only way you can do that is to have a periodic review of the regulations, the contracts that you have with external organizations, and the practices that you have internally.

If you want to pass your compliance audit, the first thing you need to do is after you have gone through the process of assessing where you are and putting whatever changes in place, when you choose to make a change or put a practice in place, make sure you can defend it. Have a good argument in place, and then, have all the documentation in place to not only show why you came to that conclusion, why you think that practice is adequate, but that it actually meets the requirements of the regulation or contract.

That’s my presentation.
