Intersecting state and federal data protection acts and regulations

Compliance managers know that one of the most tedious elements of managing compliance is determining where various compliance mandates overlap and avoiding redundant work.

In this special full-length video presentation, expert Richard E. Mackey Jr. discusses intersecting state and federal data protection acts and regulations from Massachusetts and Nevada and explains why compliance plays such an important role in securing data.

Also discussed is the Data Accountability and Trust Act of 2009, Red Flag rules and more.

Table of Contents:

  • 3:31 - Regulatory Summaries
  • 5:44 - Comparing Regulations  
  • 7:19 - MA 201 CMR 17 
  • 10:28 - WISP 
  • 11:17 - MA Administrative Controls
  • 13:36 - MA Administrative Controls 2
  • 15:38 - MA Administrative Controls 3
  • 16:38 - MA Technical Controls
  • 18:12 - MA Technical Controls 2
  • 21:35 - Nevada 603A
  • 24:37 - HITECH
  • 28:30 - Data Accountability and Trust Act of 2009
  • 32:07 - Red Flag Rules
  • 33:32 - Red Flag Requirements
  • 34:53 - Summary

About the author:
Richard Mackey has advised leading Wall Street firms on security architecture, VPNs, enterprise wide authentication, and intrusion detection. Prior to joining the consultancy SystemExperts, he was the director of collaborative development for The Open Group. Mackey is an original member of the DCE Request for Technology technical evaluation team and was responsible for the architecture of the Distributed Computing Environment Releases 1.1 and 1.2. Mackey has been a frequent speaker at major conferences and has taught tutorials on developing secure distributed applications.

Read the full text transcript from this video below. Please note the full transcript is for reference only and may include limited inaccuracies. To suggest a transcript correction, contact   

Intersecting state and federal data protection acts and regulations

Dick Mackey: I'm Dick Mackey. I'm vice president of SystemExperts, SystemExperts as in the URL. I've been in the consulting business,
and SystemExperts has been a security consulting firm, since 1997. Prior to that, from '89 to '97,
I was at the old Open Software Foundation, where I was technical lead on the DCE project,
so if you're old enough to know what DCE is, then I'm sorry. We work with a lot
of different companies, many of which are financial companies in New York City,
including a number of customers here. What today's talk is all about is the set of new regulations that have come out in the past year,
dealing with both state and federal data protection acts. There are a couple of
other ones thrown in here for good measure: HIPAA, which handles healthcare, and the
HITECH Act, then the Red Flag Rules, which I'll talk about a little, but I'll keep those to a minimum.
Most organizations have to deal with these in one form or another. The Red Flag Rules,
which haven't gone into effect yet, deal with any company that extends
credit, and then anyone who deals with personal health information has to deal with HITECH.

Here's the agenda. I'll talk a little bit of background on state and federal regulations
and how they've come about, and then I'll cover each one of the regulations that have
come out in the past year: Massachusetts 201 CMR 17, which is a data protection regulation; Nevada 603A;
a little bit about HITECH; and the federal Data Accountability and Trust Act, which has not actually been passed in the Senate.
It was passed in the House of Representatives, and then fell on hard times in the Senate. We'll see
if it gets reintroduced. And then the Red Flag Rules. What's interesting about this, and what I hope
you take away from it; there are a couple things you should take away.

One is the state regulations actually apply to anyone who has data from a resident
of those states. Right? So it doesn't matter whether the company is in Massachusetts,
if you have Massachusetts data, you have to comply with it. Another interesting point: I was
giving this presentation out in Chicago, and somebody came up from, I think it was the University
of Chicago, and said, 'I have people who were students at the University of Chicago, and they've
moved to Massachusetts. Does their data now fall under this regulation?' I said, 'Yeah.'
The interesting point is that since people move, it's hard to imagine
that you don't have to consider almost every piece of identity information that you have as possibly
being protected by these regulations. It's just kind of interesting to see that. I hadn't considered it until
someone mentioned it.

Regulatory summary, 45 states have enacted legislation requiring notification of security
breaches involving personal information. There are even some for territories like the U.S.
Virgin Islands, and so on. The notification processes are somewhat difficult. One of the
interesting problems is that your legal counsel really has to understand what those
notification processes are. We're mostly going to be talking about security mechanisms
and security programs, and the similarities and differences between these laws, but the
fact is that your legal counsel, in the event of a breach, has to understand
exactly how to go through the notification processes for the given jurisdiction.

The interesting problem that exists for all these different 45 states is that they're basically
telling you that the horse is stolen. You wait until the horse is stolen, and then you go tell the
owner of the horse that the horse was stolen. Right? The Massachusetts and Nevada laws
changed the rules to a certain degree. They passed laws to prevent breaches. That's the new trend,
right? To not only deal with the problems after they've occurred, but to try to prevent them
from occurring in the first place. The fact is if you look at the red flag rules, that's also a
prevention mechanism. In a sense that you're trying to detect attempts to steal identity,
and then react to them and try to prevent them in the long run.

If you look at the federal level, the Data Accountability and Trust Act passed in the
House that has sort of just halted in the Senate, is also requiring preventative measures.
It turns out it has to be reintroduced into a new legislative session but the point is that if you
look at it, it's similar to the two state regulations. That kind of is interesting as a trend.
The Red Flag Rules require a whole set of rules that are similar, even though the purpose
of them is somewhat different. Let's compare the regulations. Until now, almost all of
the regulations, both at the state and federal level, were risk based, in the sense that they didn't
prescribe what controls you were supposed to put in place. What they said was, 'You're
supposed to understand the risks, and then try to protect the identity data as much as
possible.' If you look at HIPAA's security rule and privacy rule, or at Gramm-Leach-Bliley
and California, they don't describe specific mechanisms or structure your programs
very much. In fact, in HIPAA there were only a few mandatory parts of your program.
You had to assess your risk. That was mandatory, but as far as encrypting data or protecting
access specifically, and how you would do it, it wasn't really specified, which I'll talk about
later to a certain degree. The new laws are much more prescriptive. HIPAA was at least
somewhat prescriptive, and at least had some controls that were required. Then if you look at
PCI DSS, which has a set of rules to protect payment card information, that's incredibly
prescriptive. The new laws stand in relation to the previous laws in much the same way.
Things seem to be moving more toward that direction than the other way around. In fact,
all of these require both a set of administrative controls and technical controls.

It's interesting to see the way the program is configured, the administration of
that program, and the specific technical controls that you have to deploy are being prescribed
to a greater extent. Let's talk about Massachusetts. The whole idea behind it is to prevent
identity theft. It's not just to notify. In fact, Massachusetts 201 CMR 17 is
a regulation; it's associated with Chapter 93H, the law. The law talks about notification. The regulation
talks about prescribing controls and how to prevent the breach in the first place. It requires
all holders of personal information of residents of Massachusetts, no matter where they are,
to implement procedural and technical safeguards to protect the information. They say in
the regulation that the mechanisms and structures described there are designed to be
consistent with industry standards. It's designed to be consistent with the ISO 27000 series, as well as HIPAA and the
Red Flag Rules, etc. You start to see a commonality in language if you look at those standards
and those regulations, and you look at Massachusetts. In fact, it's interesting to
see, if you go one by one through the Massachusetts regulations, how similar they are
to the controls as they're specified in ISO 27002.

Given the fact that this is somewhat prescriptive, one of the interesting parts
is there seems to be this little loophole in the regulation that allows you to be cut a certain
amount of slack if you're a small organization that doesn't have as much risk associated
with the information that it stores. It says that you can basically modulate the number
and depth of the controls based on four different factors. One is the size, scope and type of
business. If you're a really small business, and you only have a little bit of data, only your
employee data, the number and depth of the controls wouldn't be expected to be as high as
it would be at, say, a large company. Right? There you have not only your employee
information, but you've got customer identity information, credit card data, Social Security Numbers,
driver's license data. The more data you have, the more you're expected to implement these controls.
Another factor is the resources available: if you're a small organization and don't have many resources to apply
to this problem, you can be cut some slack. The other two are the amount of stored data you have and the
need for security and confidentiality of the particular information that you store and process.

In the end, though, if you look at 201 CMR 17, it looks to be a sign of things
to come. One of the common elements that is specifically called out in Massachusetts,
and is also called out in some of these other regulations, is that you have to have a formal
security program in place, which is just good practice anyway. All of a sudden,
we've got a requirement to have written policies and a written program. In fact,
what they say is that you have to have a formal, comprehensive, written information
security program, or a WISP. What is a WISP? When I first looked at this regulation, I thought,
'OK. What would constitute a reasonable and acceptable WISP?' It could be the full documentation
of your entire security program. That would definitely meet the requirements. At a minimum,
it's going to be documentation of the specific controls that are required in the regulation.
If you just want to meet the requirements to begin with, definitely make sure that all the
requirements inside the Massachusetts regulation are met, and in writing.
Then, what you could do is expand that over time to fall into, or grow into a full
documentation of your information security program.
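As a rough illustration of that minimal approach, a WISP could start life as a checklist of the controls the regulation names, tracked against what has actually been written down. This is a sketch only; the requirement names below are paraphrased for illustration, not the regulation's own wording.

```python
# Hypothetical checklist of 201 CMR 17 controls (paraphrased, not official text).
WISP_REQUIREMENTS = [
    "designated security program owner",
    "risk assessment method",
    "employee training program",
    "third-party oversight",
    "encryption of transmitted personal data",
    "encryption of personal data on portable devices",
]

def wisp_gaps(documented):
    """Return the required controls not yet covered by the written program."""
    return [req for req in WISP_REQUIREMENTS if req not in documented]

# Example: a program that has only documented two controls so far.
gaps = wisp_gaps({"risk assessment method", "employee training program"})
```

Each gap returned is a control that still needs to be written down before the minimal bar is met; over time the same list can grow into the documentation of the full program.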

As I said, there are two different types of control specified inside the Massachusetts Regulation.
One is the set of administrative controls. This is a common element to all of them, a designated
person, or group responsible for managing the security program. You have to appoint someone
who's responsible, or at least a role in the company or a group who's responsible for running the
program. Good practice anyway. Right at the front of your security policy, it ought to say
who's responsible for maintaining it. This is a regulation that's requiring you to say, 'Look, the
protection of Massachusetts residents' data in this company is this person's, or this role's, or this
committee's responsibility.' You also have to have a risk assessment and management program.
If you look at HIPAA, this has been in place for a long time for health information. You have
to have a formal risk assessment method that is repeatable. You have to look at what
you're trying to protect, understand what the threats are to it, understand what the controls
are that mitigate the risk of those threats becoming a reality, then determine whether that's
an acceptable risk and apply remediation if necessary. But you have to have a program that
regularly assesses and treats risks. I've got to tell you, a lot of smaller companies, typically
don't have risk assessment methods and mechanisms and programs in place.
A method of assessing the effectiveness of controls and protecting specifically
personal data. That's interesting, because as part of your risk treatment, you have
to be able to determine whether the controls that you have in place are effective.
That's part of a risk assessment. That's almost saying the same thing again. The point is
it clearly calls it out, that you have to have a method for determining the effectiveness
of the controls that you deploy.

An employee and contractor training program: some way of making sure that once
you set these policies, your employees and your contractors understand how
to apply them. A set of security policies and procedures that are written down,
and a method of monitoring employee compliance, which in fact will come up again later,
because if you have an employee who breaches a policy, you have to have a disciplinary
policy to deal with that.

Training, I already talked about; policies and procedures; and then a method for monitoring
employee compliance. Another is that you have to have monitoring and review so
that you can detect when security mechanisms are working or not working. You have
to have specific policies and procedures relating to the storage, access, transmission,
and handling of personal data. That's what this is all about. Basically you have to have
an information classification and handling policy. That becomes reality when you start
looking at the technical requirements. Because you can't transmit this information
across the internet unencrypted, and you can't store it on portable devices. You have
to have policies that say how this information needs to be treated, disciplinary measures
for noncompliance, and a reliable method for promptly disabling access for terminated
employees. What's interesting is that if you look at the structure of this law and went
through it as a checklist of the problems that have led to breaches in the past,
you would see almost all of them listed as controls that have to be applied.

You have to have, and this is an important one, a program to ensure that third parties
with access to personal data are both competent to protect that data and
contractually obligated to protect the data, and are in compliance with this. In fact,
I've heard many people say, and I agree with it, that this is going to be the driver
of compliance for organizations, because if you're asked by either a customer or a
business partner whether you're compliant, you're going to have to make a statement,
and you don't want that statement to be 'no.' The point is that even if there isn't really as big a
threat of enforcement from the attorney general's office, you'll be asked this on a continuing
basis by any organization forced to comply with this. If they're sharing data with you,
and you're responsible for protecting it, then you're going to have to make some representation
of your compliance. They're bound to ask that question and vet your processes to determine
whether you are compliant. Then you have to have physical controls in place to protect
not only the systems, the networks that transmit this, but also the paper. This is not just an
electronic protection act. This is also any form of this information has to be protected.

Annual review of security measures, and reviews whenever there's any kind of
material change in the business practices that may affect the security of this data.
What's interesting too is that, just like ISO 27001, this regulation requires you to look at your
entire program, and determine whether it's adequately addressing the real threats you have,
even in the face of changes in your business, in your technology, and in all the relationships that you might have.

Here are some of the technical controls that come out of those administrative controls. You have
to have secure user authentication methods, including secure protocols that don't expose
passwords on the network, strong passwords, secure password storage, unique user IDs,
all of those same sorts of controls that you see in ISO 27002. Access control mechanisms that restrict
access to only active users, so you can't just put these things out on shares that everybody
has access to. Automatic lockout after multiple failed access attempts, tight access controls,
restriction of access to those with business needs. In fact, if you look at this regulation, it's similar
to all the other types of regulations, it's probably more important to ensure that there
are people involved in specifically authorizing access to this information than it is to
have incredibly tight technical controls. If you can say that a supervisor approved access,
and that person has access, and there's a process for reviewing who has access
that is a strong statement that you're serious about actually protecting this information.
Rather than just simply looking at what type of technology you've got deployed,
it's more who has access to it, and whether they should have access to it, that's
important here. Then, removal of all vendor default accounts, which is sort of a nod towards
system hardening, to make sure that you're at least following reasonable security practices
and that your systems are not vulnerable.
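The "automatic lockout after multiple failed access attempts" control can be sketched as a simple counter per account. The threshold of five attempts and the reset behavior here are assumptions chosen for illustration; the regulation doesn't specify numbers.

```python
class LoginGuard:
    """Toy sketch of account lockout after repeated failed logins."""

    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = {}   # user id -> consecutive failed attempts
        self.locked = set()

    def record_failure(self, user_id):
        self.failures[user_id] = self.failures.get(user_id, 0) + 1
        if self.failures[user_id] >= self.max_failures:
            self.locked.add(user_id)

    def record_success(self, user_id):
        if user_id in self.locked:
            return False              # locked accounts stay locked until reset
        self.failures[user_id] = 0    # a success clears the failure streak
        return True

    def is_locked(self, user_id):
        return user_id in self.locked
```

A real implementation would also persist this state and alert on lockouts, since the same events feed the monitoring requirement discussed above.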

Here are some new ones. These are ones that start shading toward PCI DSS,
the Payment Card Industry Data Security Standard. One is encryption of personal records
transmitted across public and wireless networks. It doesn't mean that all your wireless
networks have to be encrypted, but at least the data has to be encrypted if it's moving
across a wireless network, and any public network, any network that you don't have control over.
Obviously that's the Internet, but it's any other network that someone else has control over.
Another is monitoring systems for unauthorized use and access to personal information, so you have
to have logs, and you have to look at those logs. Encryption of all personal information stored
on laptops and other portable devices, and this freaks people out, because all of a sudden
I have to have some sort of encryption. What's another portable device? It's certainly smart
phones. It's certainly thumb drives. It's certainly backup tapes. Then people say, 'Well, what
about all this data that I already have?' So far, I think the stuff that already exists
is grandfathered. But there's an interesting point that goes along with all of these specific
technical controls. If you've got an environment where it would be unfeasible to implement
certain controls that can be an argument, that you don't have to comply with that particular control.
But you'd have to look for other methods to reduce the risk of that control being missing.
The issue is, that if you've already got a whole set of smart phones out there with this kind
of data on it, you can't get the toothpaste back in the tube. The question is what can you do?
You'll be forced over time to figure out a way of dealing with that, but you wouldn't be immediately
non-compliant because you didn't deal with it directly.
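The two encryption requirements, in transit over networks you don't control and at rest on portable devices, amount to a simple decision rule. The category names below are assumptions made for this sketch; the regulation speaks of "portable devices" and public or wireless networks without enumerating them.

```python
# Hypothetical categories, not terms from the regulation itself.
PORTABLE_DEVICES = {"laptop", "smartphone", "thumb_drive", "backup_tape"}
UNTRUSTED_NETWORKS = {"internet", "public_wifi", "wireless"}

def must_encrypt(storage_device=None, network=None):
    """True if either where the data rests or how it travels triggers encryption."""
    return storage_device in PORTABLE_DEVICES or network in UNTRUSTED_NETWORKS
```

The point of writing it this way is that either condition alone is enough: a laptop on the internal LAN still needs encrypted storage, and a data-center server sending records over public Wi-Fi still needs encrypted transport.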

You need an Internet firewall to protect the information, and you need a vulnerability
management program in place. Vulnerability management
is interesting because until this regulation, the only place you would see this would be in PCI DSS.
All of a sudden they're saying, 'You've got to keep your systems up to date.' You
have to apply patches in a reasonable amount of time, and you have to understand
what the point of these patches is, and you have to protect yourself against viruses. You have
to have your virus protections up to date as well. Again, look at the check-marks in the
boxes that are dealing with problems that have occurred and led to data breaches in the past.
Someone who didn't update their web server, someone who didn't update the patches on any
internet facing service, or someone who didn't protect themselves against viruses have led to
breaches in the past.
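A minimal version of that vulnerability-management check is flagging systems whose last patch is older than a policy window. The 30-day window below is an assumption; the talk only says patches must be applied in "a reasonable amount of time."

```python
from datetime import date, timedelta

def overdue_systems(last_patched, today, max_age_days=30):
    """last_patched maps system name -> date of its most recent patch."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, patched in last_patched.items() if patched < cutoff)

# Example inventory: web01 was last patched well over 30 days ago.
stale = overdue_systems(
    {"web01": date(2010, 1, 5), "db01": date(2010, 3, 1)},
    today=date(2010, 3, 10),
)
```

In practice the inventory would come from a patch-management tool, and the flagged systems would feed the same checklist of past-breach causes the talk describes: unpatched web servers and internet-facing services.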

That's Massachusetts. There's a lot. It gets easier from here  in a sense because
the laws that have come after Massachusetts are less prescriptive, but they're
similar in their structures. If you look at Nevada, Nevada calls anyone who has
Nevada resident data a data collector. The high-level statement Nevada 603A makes
is that you have to implement reasonable security measures. This is similar
to the language you see in California, for example. You're supposed to protect the information,
but they don't tell you much about it. What you're trying to stop is unauthorized access,
acquisition, destruction, use, modification, or disclosure of personal Nevada resident data.
This is where it gets really scary. In Nevada, if you're a merchant, and you accept payment
cards for payment, you're all of a sudden not only required by contract with your merchant bank,
to comply with PCI DSS, you're required under this law to comply with PCI DSS. If some
resident of Nevada uses a credit card, all that credit card data is protected under state
law by the PCI DSS, which is the most prescriptive standard I've seen. Another one is that
they actually have the same two statements in Nevada that they have in Massachusetts for
encryption. You have to encrypt the personal data when it's transmitted; they say when
it's transmitted outside the control of the system of the data collector. And then encryption
of storage devices when they leave the control of the data collector, so outside the secure
boundaries. That's kind of interesting. You don't have to encrypt backup data unless they're
going off site, and you don't have to encrypt laptops if they stay on site.
It's kind of an interesting approach.

Then similar to all these other regulations, you have to have contracts with business
associates to maintain these same controls. 603A requires notification. The law
has all the same items that the law in Massachusetts has: it allows
civil action against those who breach, provides for restitution by the perpetrator to
the data collector, and the attorney general may bring an injunction against the violator.
That's true in all these states. Organizations are not liable if they're compliant. That's important,
because basically what you're after here is if you can prove you're compliant at the time
of a breach, you actually have safe harbor under the law. I did everything you asked,
we still had a problem. The state is actually protecting you in that case. It doesn't apply
to communication providers, which is interesting, but a lot of these don't. The middle man is not affected.

I'll go briefly into HITECH. It's interesting that even though we're at Financial Information
Security Decisions, a lot of financial companies still have healthcare data. It's interesting
to see, because HITECH is taking a similar approach to protecting data. This is starting to
affect more and more companies. The Health Information Technology for Economic
and Clinical Health (HITECH) Act was passed in 2009 as part of the American Recovery
and Reinvestment Act. Basically, what this legislation does is it's trying to encourage nationwide
interchange of healthcare information. What the government did was they appropriated $20
billion in incentives to get people to start transmitting healthcare information electronically
with one another, to improve the quality of care in the United States. So it sounds like a good deal,
but they recognized there to be more risks, so they strengthened the Federal Privacy and
Security Law to protect identifiable health information. If your company deals with identifiable
health information, this applies to you. It establishes federal breach notification requirements for
information that's not encrypted. It requires notification of unauthorized disclosure to any party.
It expands the compliance requirements to all organizations. In the past, if you looked at the way
HIPAA was constructed, HIPAA dealt with these entities called covered entities, which were things like
hospitals and insurance companies. Those organizations were responsible
for policing the organizations they handed information to, so-called business
associates. The HITECH Act says that anybody who has access to this data is now
responsible directly for protecting it and can be sued directly if there's a breach.

They added a couple of new requirements, for example, one that allows individuals to request
an audit trail of where their information went, and most organizations don't have
that mechanism in place, so that forces organizations to deal with that. It shuts
down a secondary information mining market. In particular, if you and I go to a
hospital, it used to be that those organizations could look through their health records
and see who they should ask for marketing and fund-raising. They could send you all the
fund-raising messages because you had been a patient there. That's disallowed under
this, and it strengthens enforcement of the federal privacy and security laws by increasing
penalties for violators and providing more resources to fund audits. That's a big deal.
What's the impact of this? A growing market for electronic interchange means there's
more risk, and there's broader applicability of the security rules. Therefore, there's
a greater need for the ability to not only understand your own compliance, but
understand your partners' compliance. One of the problems that has been true
all along for organizations in the healthcare industry, and now for an even broader set,
is that measuring compliance with the security and privacy rules has always been a challenge,
because there are really no detailed specifications of what it means to comply. If you look at
Massachusetts and Nevada, it's really clear, or relatively clear, what it is you have to do to
comply. If you look at HIPAA, it's much harder. You have to go to NIST guides and implement
various controls. The problem has been compounded with HITECH, particularly because there are
more people who fall under this.

With HIPAA, if you have healthcare data, you should take it very seriously,
because HITECH is actually increasing the funding for more and more audits, and
the penalties are higher. Now we're done with healthcare, and we look at the
possibility of the Data Accountability and Trust Act coming about. It passed the House
of Representatives. The idea behind this is, instead of dealing with the 45-plus state laws,
you deal with one data protection and notification act at the federal level. There's
been a debate whether the states would actually allow their authority to be superseded
in this case. Most states hold their own control very tightly. The idea behind this is
to supersede state notification laws. One of the interesting points is I didn't talk much
about what is considered personal information. If you look at it, it's all the usual suspects
in most states. Right? It's Social Security Number, payment card data, driver's license IDs,
bank account information. Some say with PINs, some say without PINs. At the federal level,
the Data Accountability and Trust Act only says that a bank account number is personal
information if it's accompanied by a PIN. What the Data Accountability and Trust Act does
is focus on some of these large information brokers out there, the LexisNexises
of the world, people who have immense amounts of data.

What it does is force them to check on the accuracy of the data that they store, to do policy audits regularly, and to provide individuals access to the data that they store. If you or I call and ask, we can find out what they have about us, which has never been possible in the past. They also have to submit themselves to post-breach audits: if they have a breach, they have to submit to a post-breach audit. Similar to Massachusetts and Nevada, you have to have a formal program of administrative and technical controls. Again, a responsible officer or group of people; a formal security policy to protect the personal information they've been entrusted with; a vulnerability management program that assesses what vulnerabilities are out there, monitors the systems to ensure they understand exactly what vulnerabilities are where, applies patches reasonably when necessary, and deals with viruses as well; secure data destruction procedures, so they don't let a disk just be thrown in the trash where someone can go harvest all the information off of a disk or a thumb drive or whatever; and notification procedures. That's what the Data Accountability and Trust Act says.

What's interesting about that is that it's not as prescriptive as either Nevada or Massachusetts,
but it's trying to simplify the notification procedures and put some level of control in place
and some formality in place associated with protecting personal information. It's interesting.
It'll be interesting to see. I don't know what caused it to just die. The Senate has got a lot on
their hands, but it hasn't gone anywhere. If you look at, I think it's GovTrack, just to see what
kind of regulations are happening, there seem to be so many different regulations
introduced on this topic that just fall apart after some months of debate. It was kind of surprising
that it actually passed in the House. Similarly, if we look at another regulation that has not
gone into effect yet, it's been postponed, and postponed, there's a red flag rule.
How many people here have to comply with the red flag rules?

OK. Any organization that extends credit. This is kind of interesting because
even smaller organizations, anyone who allows you to pay over time, that could be a
dentist's office, has to comply with this. It's enforced by the Federal Trade Commission.
It requires all organizations that extend credit to look for signs that some identity theft
attempt has taken place. Those signs of attempts are the red flags. Right? Examples
of red flags would be a mismatch of a name and a Social Security Number on an application
for credit. Mismatches of Social Security Number and any other personal information
like the name, the age, or the address. Right? The appearance of a known stolen identity.
If you went to a credit reporting agency after you got an application, you'd check this
person and you'd say, 'Wait a minute. This entire identity is on the list of known stolen identities.
We're going to check more closely into this,' or any kind of suspicious documents, like known forged documents.
All the application information looks fine but the perpetrator signed his name instead of the
actual name on the application.
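Those checks can be sketched as a simple screening function over a credit application. The field names and data sources here are assumptions made for illustration; a real red flag program defines its own flag list for its own business.

```python
def red_flags(application, ssn_registry, stolen_identities):
    """Return the red flags raised by one credit application.

    application:       dict with 'name' and 'ssn' fields
    ssn_registry:      ssn -> name on file (e.g., from a credit bureau)
    stolen_identities: set of (name, ssn) pairs known to be stolen
    """
    flags = []
    ssn = application["ssn"]
    on_file = ssn_registry.get(ssn)
    if on_file is not None and on_file != application["name"]:
        flags.append("name/SSN mismatch")
    if (application["name"], ssn) in stolen_identities:
        flags.append("known stolen identity")
    return flags
```

Any non-empty result would route the application to closer review rather than automatic approval, which is the "recognize and respond" loop the rules require.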

What do you have to do to comply? You have to have formal governance in place.
Formal governance is a sign of responsibility. In fact, this is one of the few regulations
that actually calls out that you have to have visibility all the way up at the board of directors
level. You need an assessment of the risk of identity theft in the context of your particular business, because the
program for seeing red flags is unique to the organization that's trying to
implement this red flag program. You have to have a specification of the types of
red flags that you're going to look for in the program, policies and procedures to effectively
recognize red flags, and respond to red flags, training for anyone who's going to be involved
in this process, and then service provider oversight. In the same way this is a partner
management program. Instead of just handing data off, now you're handing processes
off that are supposed to look for attempts at identity theft. If you hand the call center off,
or the processing of applications, the red flags have to be looked for just as well by a partner as they
would by you and your program. Your program has to take account of that.

That's the end of the presentation on the state and federal laws. What you should take away from this
is that the trend we're seeing is much more prescriptive controls. But there's
still this idea that the program you put in place should be dealing with the risks that your
organization recognizes as associated with the data that you're trying to protect. That's why
every one of these programs has sort of the same structure to it. Right? That you have
to have governance in place. You have to have written policies. You have to have data
controls in place to make sure that only the right people have access. You have to have
identity management and access processes in place to know who has accounts on your various systems,
that they're authorized to have access, and that they're terminated when they're
supposed to be terminated. You have to have risk assessment in place, vulnerability management,
incident response, and monitoring and assessment on a regular basis to make sure
that your program is actually effective.

If you have a security program in place that meets all of these requirements,
you're going to be in good shape. The best approach, rather than looking at each one
of these laws and trying to tackle them one by one, is to establish a general security
program with all the general elements in it, and then tweak each one of those
sections to meet the requirements of these federal and state laws.

