As computers and networks have become more complex, so too have the approaches for securing them.
In this CISSP Essentials Security School lesson, Domain 4, Security Models and Architecture, noted CISSP certification exam trainer Shon Harris investigates the framework and structures that make up typical computer systems. The special video presentation below sketches the evolution of security models and evaluation methods as they have struggled to keep pace with changing technology needs.
Before watching the special Domain 4, Security Models and Architectures video, it's recommended that students first read the Domain 4 spotlight article, which provides an overview of the concepts presented in the video. Key spotlight article topics include computer and security architecture, namely the framework and structure of a system and how security can and should be implemented; security modes and models, the symbolic representations of policy that map the objectives of policy makers to a set of rules that computer systems must follow under various system conditions; system evaluation, certification and accreditation, the methods used to examine the security-relevant parts of a system (e.g., the reference monitor, access control and kernel protection mechanisms) and how certification and accreditation are carried out; and common threats and vulnerabilities specific to system security architecture.
After watching the video, test your comprehension of this material with our Domain 4, Security Models and Architecture quiz. Upon completion, return to the CISSP Essentials Security School table of contents to select your next lesson.
About Shon Harris:
Shon Harris is a CISSP, MCSE and President of Logical Security, a firm specializing in security education and training tools. Logical Security offers curriculum, virtual labs, instructor slides and tools for lease by training companies, security companies, military organizations, government sectors and corporations.
Shon is also a security consultant, an engineer in the Air Force's Information Warfare unit, an entrepreneur and an author. She has authored two best-selling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook.
CISSP® is a registered certification mark of the International Information Systems Security Certification Consortium, Inc., also known as (ISC)².
Read the full transcript from this
video below:
CISSP Essentials training: Domain 4, Security Models and Architecture
Host: Welcome to SearchSecurity.com's CISSP Essentials:
Mastering The Common Body of Knowledge. This is the fourth in a series of 10 classes exploring the
fundamental concepts, technologies, and practices of Information System Security corresponding to
the CISSP's Common Body of Knowledge. In our previous class we examined cryptography; today lecturer Shon Harris will give a presentation on two fundamental concepts in computer and information security: the security model, which outlines how security is to be implemented, and the architecture of a security system, the framework and structure of a system.
Shon Harris is a CISSP, MCSE, and president of Logical Security, a firm specializing in security education and training. Logical Security provides training for corporations, individuals, government agencies, and many organizations. You can visit Logical Security at www.logicalsecurity.com.
Shon is also a security consultant, a former engineer in the Air Force's Information Warfare unit, and an established author. She has authored two best-selling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook. Thank you for joining us today, Shon.
Shon: Thank you for having me.
Host: Before we get started I'd like to point out several resources that supplement today's
presentation. On your screen the first link points to the library of our CISSP Essentials classes
where you can attend previous classes and register to attend future classes as they become
available. The second link on your screen allows you to test what you've learned with a helpful
practice quiz on today's materials. And finally, you'll find a link to the Class Four spotlight,
which features more detailed information on this domain. And now we're ready to get started. It's all yours, Shon.
Shon: Thank you. Thank you for joining us today. Today I will be looking at security architecture and models, and really what this domain looks at is the inside of an operating system or an application: the components of the computer itself, the components of the software, and how the components all work together to properly protect the environment for the user. It's really important that all the components work in concert to provide a certain level of protection for the environment, because the complexity involved in an operating system and its applications, and how all these pieces and parts communicate with the motherboard, peripheral devices and the CPU, has to be properly architected and implemented. And we have certain models that we can use to help direct us in choosing the right architecture and how to properly develop that architecture.
So in this domain we're starting with the basic components of a computer, looking at the CPU, address buses, data buses, input/output devices, interrupts, and how they all communicate. Then we go into the differences between threads and processes, and how multitasking, multiprogramming, and multiprocessing work. We look at the protective mechanisms within every operating system: the protection ring architecture, process isolation, the security kernel and reference monitor, and virtual machines.
And then we go into the actual models, and there's a whole range of models that can be used: the [inaudible] model, the Bell-LaPadula model, the [inaudible] model, the Clark-Wilson model, and we'll quickly talk about where these models come into play. Then we look at the different evaluation criteria that have been used around the world for quite some time, their purpose, and certification and accreditation. This domain also goes through a lot of different types of attacks that could happen against the components that have been developed to properly protect the overall system.
So like I said, this domain starts with the basic pieces of every computer: the processor and the different types of memory. Memory protection is huge in an operating system, to ensure that data has the correct integrity and the correct level of confidentiality, and that the operating system environment is stable for users and for applications to work. The core of an operating system, the kernel, has a memory manager, and you need to understand the role of this memory manager. Basically, processes are going to request segments of memory, and the manager has to allocate those segments, but it also acts as an access control that ensures the right processes are accessing the right memory segments, that they don't step on each other's toes and corrupt each other's data, and that covert channels aren't allowed to take place, which we'll look at toward the end of this presentation.
So we cover the different types of memory within a computer system, cache versus real memory, and storage types, secondary storage and primary storage. Also the difference between what a process is and what a thread is: a thread is an instruction set that is dynamically created and destroyed by a process. All applications run as processes, and you need to understand some of the security issues around how processes interact with each other, because if the operating system does not properly control how processes communicate, that directly affects the whole stability and security of the system. And we also go through the different types of languages and their generations.
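As a rough sketch of that process/thread distinction, the Python snippet below has one process create and destroy a few threads; the worker function and counter are invented for illustration. The threads share the process's memory, which is exactly why the operating system has to keep them from stepping on each other.

```python
import os
import threading

shared_counter = 0        # lives in the process's memory, visible to every thread
lock = threading.Lock()   # without it, threads could corrupt each other's data

def worker(name):
    global shared_counter
    with lock:            # serialize access to the shared memory
        shared_counter += 1
        print(f"thread {name} in process {os.getpid()}, counter={shared_counter}")

# The process dynamically creates its threads...
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
# ...and they are destroyed when their instruction sets finish.
for t in threads:
    t.join()
```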
So we start off with the CPU itself and the components of the CPU, and you need to understand those components and how they actually work together. Most CPUs today are not multitasking. Operating systems are multitasking, meaning that they can carry out several different tasks at one time; what that really means is that you can have several processes doing their thing at one time, and the operating system can deal with it, it can respond to all of these process requests. The CPU is not multitasking; it deals with one instruction set at a time. So the CPU is time-sliced and shared between all of the processes in an operating system, and this has to be controlled properly. This is where software and hardware interrupts come in: a peripheral device is assigned a hardware interrupt, and if it needs to communicate with the CPU, it has to wait until its hardware interrupt is called, and then its instructions go over to the CPU.
Now the CPU has its own type of storage, called registers, and these are temporary holding places. Really, the instructions don't live in the registers; the addresses of where the instructions are held in memory, that's what is being held in the registers for a CPU. So if a printer needs to communicate with the CPU, once its interrupt is called, the actual address of where its instructions and data are held in memory will go into the registers. The control unit is what actually controls the time slicing for the CPU; since there are so many things competing for the CPU's time, the control unit is basically the traffic cop saying, "Okay, your instructions and data will now go over to the CPU."
And when that happens, the ALU does its work. It carries out the mathematical functions and the logical functions, because operating systems and applications are all just sets of instructions. The instructions have empty variables, and this is where data gets put in. So the instruction basically tells the CPU, "This is what I need you to do with this data." The instruction set and the data go over to the CPU, it does its work, and it sends its response back to the requesting application at the memory address.
Now, I said that memory management is very important. Memory management is responsible not only for ensuring that processes stay within their own memory segments, but also for the fact that some memory segments are shared. We have shared resources within an operating system, and so that has to be controlled tightly.
You also need to understand how paging works and how virtual memory works. Virtual memory means that the operating system is basically fooling itself into thinking that it has a lot more memory than it actually does, by using a part of your hard drive, the page file. Once your RAM gets filled up, the operating system moves data down to your hard drive, and when an application needs to read data that's now on your hard drive, that's called a page fault, and the data is moved from your hard drive up into memory so your requesting application can interact with it.
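Here is a toy Python simulation of that demand-paging behavior, with an invented three-page RAM and a dictionary standing in for the page file; real memory managers are far more sophisticated, so treat this only as a sketch of the page-fault idea.

```python
from collections import OrderedDict

RAM_CAPACITY = 3
ram = OrderedDict()   # page number -> data, kept in least-recently-used order
page_file = {}        # pages that have been evicted to "disk"

def access(page):
    if page in ram:
        ram.move_to_end(page)              # recently used, keep it resident
        return ram[page]
    print(f"page fault on page {page}")    # not resident: fetch from disk
    if len(ram) >= RAM_CAPACITY:           # RAM is full: evict the LRU page
        victim, data = ram.popitem(last=False)
        page_file[victim] = data
    ram[page] = page_file.pop(page, f"data-{page}")
    return ram[page]

for p in [0, 1, 2, 0, 3, 1]:               # touching page 3 forces an eviction
    access(p)
```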
And as our operating systems have become better, as we have created more stable operating systems over time, we don't have as many memory problems as we did in the past. You may remember that in Windows 95, the 9X family, we had certain types of errors and blue screens and fatal errors that took place because those systems had inferior memory management compared to the Windows NT and 2000 family. And that mainly has to do with the fact that the 9X family had to be backward compatible with some of the 16-bit applications, where NT and 2000 didn't.
Now, processes can work in different states, and these are not all the states shown here; there are other states that processes can work in. Again, the operating system has several different processes: your applications and utilities all work as processes. A process can be stopped, and how you stop a process depends on what operating system you're using. If you're using a Linux or a Unix system, you'd use the kill command, which is just a utility that sends a parameter over to the process.
In Windows, you'd use Task Manager. If a process is in the waiting state, that means it's waiting for its interrupt to be called. I said that we have hardware interrupts and software interrupts; the process has to wait for its interrupt to be called so that it can send its instructions and its data over to the CPU. If a process is running, that means it's waiting for the CPU to finish its task and send back the reply, the result of what it sent over.
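A simple way to picture this is as a state machine. The Python sketch below models only the states mentioned here (real operating systems have more), with invented transition rules:

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"      # its instructions are at the CPU
    WAITING = "waiting"      # waiting for its interrupt to be called
    STOPPED = "stopped"      # e.g., killed via kill on Unix or Task Manager on Windows

ALLOWED = {
    State.READY:   {State.RUNNING, State.STOPPED},
    State.RUNNING: {State.READY, State.WAITING, State.STOPPED},
    State.WAITING: {State.READY, State.STOPPED},
    State.STOPPED: set(),    # a stopped process goes nowhere
}

def transition(current, nxt):
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

s = State.READY
for nxt in [State.RUNNING, State.WAITING, State.READY, State.RUNNING, State.STOPPED]:
    s = transition(s, nxt)
    print(s)
```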
Now, we've gone through kind of the evolutionary path of different languages. The CPU only understands binary, ones and zeros, so everything actually ends up in a binary language so it can be processed by the CPU. But we figured out it's not very easy for human beings to program things in bunches of ones and zeros; our brains don't work well with that type of representation. So the first thing that we came up with is assembly language. Assembly language is basically a hexadecimal representation; it's a different type of encoding process. Assembly language works at a very low level of an operating system, versus a higher-level language. With high-level languages we have several generations, but the benefit of a high-level language is that if I'm going to write a program in it, I don't have to be concerned about a lot of the details of the system itself. I don't need to understand all the memory addresses and how to move data from one memory segment to the next; if I'm working at the assembly level, I do have to understand and work with that detailed type of information. So a high-level language allows me to create much more powerful programs, because I only have to be concerned with the complexity of my application, versus being concerned with and understanding the complexity of the actual environment.
In no way does that mean we don't use assembly language anymore. We use assembly language to do things like develop drivers for our operating systems. And a high-level language can be compiled or interpreted. An interpreted language is a script: individual lines are turned into machine language one at a time, where compiled means all of the source code is turned into object code. Source code is what the programmer writes out; that gets compiled into object code. The object code is for a specific platform, for a specific CPU and operating system. And then the object code gets converted into binary machine language when processing actually needs to take place.
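Python happens to expose this pipeline directly, so here is a small sketch of it: source code is compiled to an intermediate code object, which can be inspected and then executed. Natively compiled languages take the extra step down to CPU-specific machine code.

```python
import dis

source = "result = 2 + 3"
code_object = compile(source, "<example>", "exec")  # source -> intermediate "object code"
dis.dis(code_object)                                # inspect the low-level instructions

namespace = {}
exec(code_object, namespace)                        # the execution step
print(namespace["result"])                          # -> 5
```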
Now, a majority of this domain actually studies the different protection mechanisms within really every operating system we use today. All operating systems work on a type of architecture where they can assign trust levels to the individual processes and components within the software; the higher the trust of a process or component, the more that process can carry out. Memory segmentation is very important, which I kind of covered, but you really need to understand it in more depth for the exam.
Layering and data hiding have to do with low-level processes not being able to communicate with high-level processes. In most situations we don't want lower-level processes that are not trusted to be able to communicate directly with high-level processes, because we don't want them to corrupt them. In an operating system, what happens is that when low-level processes need to communicate with services, operating system services or the kernel, the request gets passed off from something less trusted to something more trusted. I'll talk a little bit more about that when we get into protection rings.
Virtual machines: you need to understand how they work. A virtual machine is just a simulated environment that is created for different reasons. We create virtual machines for applications that can't work in a certain type of environment. We create virtual machines as protection mechanisms, such as when we're downloading Java applets and the Java Virtual Machine comes up with a sandbox. And we use virtual machines with VMware or those types of tools, where we want to run more than one operating system on the same physical computer.
Now, protection rings are very important, and they're used in every operating system. I find that a lot of people don't really understand what protection rings are or how they come into play; a lot of people just kind of memorize the stuff. But the CPU is actually setting the architecture for the protection rings of a system. The CPU, your processor, has a lot of microcode in it, and your operating system has to be developed to work with that CPU and be able to understand and work within that certain architecture. That's why some operating systems can't work on certain processors: because they're working within two different architectures, they can't communicate back and forth.
So it's really the CPU that's driving this protection ring idea, and you can conceptually think about protection rings as individual rings, as we're showing here: rings zero, one, two and three. The less-trusted processes run in a higher-numbered ring. So our applications, which are less trusted, run in ring three; the most trusted run in ring zero, and that's where the kernel of the operating system works. The rest of the operating system works in ring one, and then possibly some utilities work in ring two. Now, the reason this is set up is that when processes need to communicate with the CPU, the CPU is actually going to run in a specific state or mode.
The processor can run in user mode or privileged mode, and which one depends on the trust level of the process that is actually sending a request over to the CPU. If it's a less-trusted process, say an application that runs in ring three, then the CPU is going to execute those instructions in user mode, and that really reduces the amount of damage the instructions can carry out; if something's being executed in user mode, then it's contained, and the instructions can't carry out more critical tasks or do certain types of damage. If more-trusted processes send instructions over to the CPU, then the CPU works in privileged mode, which allows the instructions to carry out more critical and possibly dangerous activities.
And within the protection rings there's also active control of how the processes communicate with each other. A process in ring 3 cannot directly talk to something that's in ring 1. That's why we have operating system services: a less-trusted process sends its request to an operating system service, the service carries out the request, and it brings the result back to the less-trusted process. So that's how your applications communicate through the operating system to the core components of the system. And all operating systems use this type of architecture; it's just that different operating systems use different numbers of rings, and they may put different items in the different rings.
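Conceptually, the mediation looks something like the Python sketch below. The function names and ring assignments are invented; the point is only that the ring 3 application never calls the privileged routine directly.

```python
def kernel_read_disk(sector):           # conceptually ring 0: runs in privileged mode
    return f"raw contents of sector {sector}"

def os_file_service(path):              # conceptually ring 1: trusted OS service
    # The service validates the request before doing privileged work on
    # behalf of the less-trusted caller.
    if ".." in path:
        raise PermissionError("rejected suspicious path")
    return kernel_read_disk(sector=hash(path) % 100)

def application(path):                  # ring 3: least trusted
    # The app never touches kernel_read_disk directly; it asks the service.
    return os_file_service(path)

print(application("/home/user/notes.txt"))
```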
Now, TCB is a pretty important concept, the Trusted Computing Base, and what that refers to is all of the software, firmware, and hardware components that are used to protect an individual system. This term actually came from the Orange Book, and we'll talk about the Orange Book, but the Orange Book is an evaluation criteria. So let's say I develop a product and I need it to be evaluated so that it gets some type of assurance rating, like a C2 rating or a B3 rating. I have a product, and I'm going to send it off to a third party so that they'll test it and assign it some rating, and then my customer base will look at this rating and understand the level of protection that my product provides.
The Trusted Computing Base is really just a concept of all of these components that provide protection for the system, but what's important is that these are the components that are actually tested during the assurance testing. When I send my product off to be tested under one of these evaluation criteria, it's these protection mechanisms that will be properly evaluated, because the assurance rating is determining the level of protection my product is actually providing. And that's the whole reason for the concept: the components, software, hardware, firmware, are all over a computer system. It's not like they all live in one little place in an operating system, but that's the reason to conceptually put them in one place. By basically putting everything in one pot, you're saying these are the things that will be tested when the product needs to get some type of assurance rating.
Now, every operating system also uses a security kernel and a reference monitor. A reference monitor is more of a concept than actual code; it's referred to as an abstract machine. The reference monitor concept basically outlines how subjects and objects can communicate within an operating system. A subject is an active entity that is trying to access something; an object is a passive entity that is being accessed. It's extremely important that within the operating system, the types of access that take place, and the operations that are carried out after access is allowed, are properly controlled. The reference monitor is basically the rules of how subjects and objects can communicate within an operating system, and the security kernel is the enforcer of those rules.
The security kernel is a portion of the actual kernel of an operating system, and it carries out the rules of access. Now, you as a network administrator can configure some of the access rules: here we have Katie has no access, Jane has full control, Darren has write access. That's just a very short list of all the things the security kernel is doing, because while you're able to control which users interact with which objects, the security kernel also needs to make sure that it's properly controlling how all the processes are communicating within the operating system. So the developers of the operating system have to understand the complexity of the operating system and, when applications are put in there, how all of those things communicate in a secure mode.
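Using the access list from that example, here is a minimal Python sketch of a security kernel enforcing the reference monitor's rules; the file name and operations are invented for illustration.

```python
# Access rules from the example: Katie no access, Jane full control,
# Darren write access.
ACL = {
    "report.txt": {
        "Katie": set(),
        "Jane": {"read", "write", "delete"},
        "Darren": {"write"},
    },
}

def reference_check(subject, operation, obj):
    # The reference monitor concept: every access attempt is mediated here,
    # and the security kernel enforces the decision.
    allowed = ACL.get(obj, {}).get(subject, set())
    if operation not in allowed:
        raise PermissionError(f"{subject} may not {operation} {obj}")
    return f"{subject} performed {operation} on {obj}"

print(reference_check("Jane", "read", "report.txt"))      # allowed
print(reference_check("Darren", "write", "report.txt"))   # allowed
try:
    reference_check("Katie", "read", "report.txt")        # denied
except PermissionError as e:
    print(e)
```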
This brings us to why we even have models. I find that a lot of my students don't like these models because they're not familiar with them. You can be a security professional for 20 years and never have heard of one of these models, and so a lot of people feel, "Well, I don't need to know anything about this, and this is kind of wasting my time," but I don't really agree. If you get a CIS degree, or a higher degree like a master's or a doctorate in computer security, you're going to be covering a lot of these models, because they are important.
And again, what I find is that a lot of people just memorize the necessary things about the models so they can dump it all onto the CISSP exam, which is really too bad. It's better to understand not only the characteristics of the components that are covered for this exam, but their place in the world. Because if you understand where everything's place is in the world, you have a much bigger and broader understanding of information security, and you have a better understanding of how it works in depth.
So we have all of these different models, and they are theoretical in nature, conceptual in nature. There are several different types of them because different vendors have different security requirements for their products. If I am going to develop a firewall, then I'm going to have a lot of different security needs than if I'm going to develop a product that's going to do online banking. So what model you choose depends on the type and level of security and functionality that you need within your product.
So think about how a vendor knows what level of protection they need to provide. They know by the customer base that they're going after. I have to understand my customer base and the level of protection that the customer base is going to require of my product, and then I need to find a model that's going to help me meet those needs. So instead of just having all the programmers start writing code without any clear direction, I can start with a model, and what these models do is outline: if you follow these specific rules, then this level of protection will be provided. So the model starts at the conceptual level; it starts even before you're in the design phase of your product. You choose the right model, you go into your design phase, you go into your specification phase, and then you start actually programming.
Now, there are several different models. One that gets hit often is the Bell-LaPadula model, which was developed in the '70s. The United States military was keeping more of its secrets in time-shared mainframes, more and more of its confidential information, and in the military not everybody can access all the information; it's based on your clearance and the classification of the data. So basically the United States government said, "Okay, how in the world do we know that the operating systems on these mainframes are keeping track, or making sure that people are only accessing what they're supposed to?" This is why the Bell-LaPadula model was developed: it provides the structure for any system that is going to provide MAC protection, mandatory access control.
Mandatory access control means the system makes the decision on whether a subject can access an object by comparing the subject's clearance and the object's classification level. So the Bell-LaPadula model really has all the rules: if you want to create a MAC operating system that provides this level of protection, these are the rules that you need to put in place. Bell-LaPadula was the first mathematical model; it incorporates the information flow model and the state machine model, and it uses different rules that you need to know about, such as the star (*) security rule and the simple security rule. And really, the importance of the Bell-LaPadula model is that it kind of set the stage for how all of our MAC-based systems were going to be developed, but it kind of affected us even after that, because the Orange Book was developed to actually evaluate MAC systems, systems that were built on the Bell-LaPadula model, and we kind of expanded from there. I'll talk about the Orange Book in just a minute.
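The two rules just mentioned can be sketched in a few lines of Python. The clearance levels below are the standard military ones; the rule checks are a simplification of the formal model.

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple security rule: no read up. A subject may not read objects
    # classified above its clearance.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    # Star (*) security rule: no write down. A subject may not write to
    # objects below its clearance, which could leak high data downward.
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

print(can_read("secret", "confidential"))   # True: reading down is fine
print(can_read("secret", "top secret"))     # False: no read up
print(can_write("secret", "confidential"))  # False: no write down
print(can_write("secret", "top secret"))    # True: writing up is allowed
```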
There are several different models, of course; I don't have time to go through all of them, and even with the Bell-LaPadula model you need to know more details than just what I'm saying. But a newer model is the Brewer-Nash model, also referred to as the Chinese Wall, and what this model does is try to address conflicts of interest in specific industries. So for example, let's say I'm a stockbroker. There could be a conflict of interest if all of you are my customers and I go into each one of your accounts and try to figure out whether you're selling or buying or what you're doing, to tell another one of my customers that information. So I'm giving them the trends: "Look, everybody's selling this, so you should probably go ahead and dump it." That's a conflict of interest. So instead of just relying on humans doing the right thing, there are products, there is software, that can try to enforce that these types of things don't happen.
What Brewer-Nash allows for is dynamically changing access controls. So if you're all my customers and I go into one of your accounts, now I can't access anybody else's account until I come out of that system. Whatever activities I'm doing on your account, I can't go check on three or four other accounts and then come back and make the change to your account. Another example: let's say you and I work for the same marketing company, and our marketing company's customers are financial institutions, so your customer is Bank One and my customer is Citibank. Now, we work for the same marketing company, but when I'm working with my customer's information, I should not be going and looking at your customer's information that could be held on the same system, because our customers are competitors and I shouldn't be able to go find out your information and try to one-up you.
So these are some examples of where the Brewer-Nash model could be used and how it tries to put controls in to ensure that users cannot carry out things that would be a conflict of interest. Now, what's important to realize is that these are just models; they're theoretical, high-level direction and instruction. A model does not tell any vendor exactly how to program anything. So it's up to the vendor to actually meet the requirements and objectives that are laid out in the model.
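As a sketch of those dynamically changing access controls, the Python snippet below uses the Bank One/Citibank example; the conflict-class data and session mechanics are invented for illustration.

```python
CONFLICT_CLASSES = {"banks": {"Bank One", "Citibank"}}  # competing customers

class ChineseWall:
    def __init__(self):
        self.touched = set()   # accounts this subject has already accessed

    def access(self, account):
        for members in CONFLICT_CLASSES.values():
            if account in members:
                # Already touched a *competitor* in this conflict class?
                if self.touched & (members - {account}):
                    raise PermissionError(f"conflict of interest: {account}")
        self.touched.add(account)   # access dynamically shrinks what's allowed next
        return f"accessed {account}"

session = ChineseWall()
print(session.access("Bank One"))   # first access in the class: allowed
try:
    session.access("Citibank")      # competitor in the same class: blocked
except PermissionError as e:
    print(e)
```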
Now, this domain also goes through the different evaluation criteria. The Orange Book has been hit pretty hard on the exam, and although the Orange Book is being replaced by the Common Criteria, that doesn't mean the Orange Book isn't important; it has been used for 20 straight years. What it is, is a set of ways to actually test a product. So say I'm a vendor: I find out the level of protection my customer base needs, so I know what they're going to require of my product, and I need to know how to meet a certain level. The Orange Book uses a hierarchical rating system, A through D, where "A" is the highest you can get. Really, from my understanding, in the commercial sector there is only one "A" system, I think Boeing uses it, but all the other "A's" are used in the government sector, where they're keeping very, very secret information.
We're most familiar with things that get a "C" rating, because they're using the DAC model, the Discretionary Access Control model. So let's say that I'm a vendor and I know that my customer base is going to require a B2 rating of my product; I need to know how to achieve that B2. That's what this criteria can be used for, so that vendors know how to achieve a B2. The criteria is also used by the evaluators: for each one of these ratings there are certain types of tests and certain things that need to take place before a product gets the stamp of one of these ratings. So the criteria is all of the procedures; if something is going to get a "C" rating, it's not going to go through the scrutiny and the types of tests that something getting a "B" or an "A" rating would.
So the Orange Book was developed 20 years ago, and it was developed specifically to evaluate the operating systems that were built upon the Bell-LaPadula model, to determine what level of protection is being provided. Now, there's a lot of literature that indicates the downfalls of the Orange Book. It only covers stand-alone systems; it doesn't cover networking; it doesn't cover integrity and availability, only confidentiality. You can't use it to evaluate networks, software, or databases, or a whole range of things. So what happened (the reason it's called the Orange Book is because the cover was orange) is that they had to come up with a whole bunch of other books to address all of these other components of a distributed and diverse environment. Each one of these books had a different color cover, and that's where they came up with the Rainbow Series. Probably for the first handful of colors it made sense, but they had to keep coming up with more and more books, and now they have yellow and neon yellow and light yellow, and green and light green and dark green, so it kind of got out of hand. So the Orange Book was really only developed to evaluate operating systems, but we used it for a lot of other things; we sort of stretched it and tried to use it for too many things.
There are other criteria that you need to know about for the exam: ITSEC, the Canadians have their own evaluation criteria, and there's the Federal Criteria. The up-and-coming criteria, the one that is most used today, is the Common Criteria. And the goal of all these criteria efforts is to try and get one that will be accepted worldwide. Because for quite some time, and still a little bit today, different countries have had their own criteria, and that's frustrating, especially for vendors trying to sell their products all over the world, since they have to be tested against these different criteria. Plus, we want to get all the countries on the same level of information security, so that means our standards have to work across the different countries.
So the Common Criteria was developed, and it seems to be the best candidate for accomplishing these goals. Now, the Common Criteria starts off with a protection profile, and what a protection profile does is describe the real-world need for some type of product that has not been developed yet. Anybody can basically write a protection profile. So I can say, "I have this security issue, and I can't find a product to actually help me with it," so I'll write a protection profile. There are several protection profiles that are posted, and what happens is the vendors will go and look at these protection profiles and make a determination whether they're going to build a product that maps to what is in the protection profile. So what they'll do is choose maybe one, and then look at the market analysis and the demand and what's required to build this product.
So let's say I'm a vendor and I've chosen one of these protection profiles that have been created, and I'm going to build the product to meet it. That product is referred to as the TOE, the Target of Evaluation. Now, I'm the vendor, so I'm going to write up a security target. The security target explains how my product maps to the necessary requirements to achieve a certain assurance rating, because each one of these evaluation criteria has its own assurance ratings: we looked at the Orange Book, which has A through D; ITSEC, which we didn't look at in depth, uses E's and F's; and the Common Criteria uses Evaluation Assurance Levels (EALs).
So in the security target, I, the vendor, am going to write up exactly how my product meets the requirements of the assurance level I'm going after and how it meets the real-world needs outlined in the protection profile. So the protection profile outlines the real-world needs; the product is referred to as the Target of Evaluation, the thing that will be evaluated; and the security target describes how that product provides the level of protection. Then it goes into the evaluation process, and the EAL rating that somebody's going after determines the type of tests and scrutiny that the product will go through. It goes through its testing, achieves a specific EAL rating, and then it's posted on the Evaluated Products List. You can find these on the Internet for any of the different evaluation criteria; if you're curious or want to know the assurance ratings that different products have achieved, you can go look at an EPL.
Now, this domain actually lists a lot of different types of threats, and it depends on the actual resource you're studying for the CISSP exam where these threats and attacks will appear; some are in one domain, some are in another domain, but here I've clumped a lot of them together. You need to know how these attacks work, the countermeasures for these attacks, and maybe some actual examples of them. So, backdoors are not too hard: somebody will compromise your system and install a backdoor, and really what that means is that there is a service listening on a specific port so they can enter your system any time they want, without you knowing it, and they don't have to go through any access control mechanisms.
Timing attacks: there are different types of timing attacks that you need to be aware of. Race conditions have to do with the sequence of processes working on one piece of data. A race condition is not just an attack; it's a concern within any type of programming, because process one has to carry out its instructions before process two does. If process two carries out its instructions before process one, the result is different.
So the security concern with a race condition is that if I'm an attacker and I can figure out how to get process two to happen before process one, I control the result. And really, what that comes down to is that the authentication and authorization steps are two different steps. Process one would do authentication, where I have to be authenticated; step two, process two, would be the authorization step: "Okay, go ahead and access the resource you're trying to access." If I can get that second process to execute before the first process, then I could access something and skip the whole authentication piece, so I don't have to provide the right credentials.
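The Python sketch below illustrates that two-step dependency with invented names and a stand-in credential check; the comment marks the shared state an attacker would try to race.

```python
authenticated_users = set()   # shared state between the two steps

def authenticate(user, password):            # process one: authentication
    if password == "correct horse":          # stand-in for a real credential check
        authenticated_users.add(user)

def authorize(user, resource):               # process two: authorization
    # If an attacker can make this run before authenticate(), or can race
    # the shared state, the access decision skips the credential check.
    if user in authenticated_users:
        return f"{user} may access {resource}"
    raise PermissionError("not authenticated")

authenticate("alice", "correct horse")
print(authorize("alice", "payroll.db"))        # correct sequence: allowed
try:
    print(authorize("mallory", "payroll.db"))  # never authenticated: denied
except PermissionError as e:
    print(e)
```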
You'll also need to know about buffer overflows, which happen when too much data is being accepted, and you'll need to understand what bounds checking is and how bounds checking can be used to ensure that buffer overflows don't happen. So there are a lot of different attacks that the CISSP exam covers that you need to know; again, not just how they work, and not just memorizing what they mean, but also some countermeasures.
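Bounds checking itself is simple to sketch. In the Python snippet below (buffer size and inputs invented), input longer than the buffer is rejected instead of being accepted blindly, which is the check that languages like C leave to the programmer.

```python
BUFFER_SIZE = 16

def copy_into_buffer(data: bytes) -> bytearray:
    if len(data) > BUFFER_SIZE:   # the bounds check
        raise ValueError(f"input of {len(data)} bytes exceeds {BUFFER_SIZE}-byte buffer")
    buf = bytearray(BUFFER_SIZE)
    buf[:len(data)] = data        # safe: the data is known to fit
    return buf

print(copy_into_buffer(b"hello"))   # fits: accepted
try:
    copy_into_buffer(b"A" * 64)     # too much data: rejected, not overflowed
except ValueError as e:
    print(e)
```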
Now, since this domain looks mainly at the components within an operating system, covert channels are definitely covered in this domain. A covert channel just means that somebody is using a channel for communication in a way that it was not developed for. Before we look at an example inside the operating system, let's take just a minute and look at one outside of it. We could say that terrorist cells within the United States could be communicating through covert channels. For example, everybody in a terrorist cell knows to go to a certain website, let's say four times a day, and check a certain graphic on it. If the graphic changes, then that means something to that cell and they know to do something; they know to go on to step two. That's covert in nature, just meaning that you're using something for communication purposes when that's not what it was developed for. Overt channels means that you're using communication channels in the proper way.
So within operating systems, there are two main types of covert channels: timing and storage. A storage covert channel means that a process can write some type of data to a shared medium that a second process can come and read. So say you and I are not supposed to be communicating: I'm working at top secret, you're working at secret. I should not be writing data down to anybody at the secret level, because I could be sharing information that I'm not supposed to. But if I figure out how to get my process to write to maybe a page file or some type of resource that we share, I'll write to it, your process will come and read it, and that would be an example of a covert storage channel.
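Here is a toy Python illustration of a covert storage channel, with a temporary file standing in for the shared page file; it only shows the mechanism of signaling through a shared medium, not a realistic attack.

```python
import os
import tempfile

# A temp file stands in for a shared resource such as a page file.
shared = tempfile.NamedTemporaryFile(delete=False)
shared.close()

def top_secret_process():
    with open(shared.name, "w") as f:   # the high process writes "down"
        f.write("1")                    # even a single bit is a signal

def secret_process():
    with open(shared.name) as f:        # the low process reads the signal
        return f.read()

top_secret_process()
print("covert bit received:", secret_process())
os.unlink(shared.name)                  # clean up the shared medium
```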
Now, a covert timing channel is a little bit different: it means that one process modulates its use of resources for another process to interpret, and you can kind of think of it as Morse code between processes. So if I figure out how to get the CPU cycles up for a certain period of time, for a certain length, and your process watches for that, it tells your process something. Or let's say my process writes to the hard drive 30 times within 30 seconds, and your process just watches for that type of activity. Those are just examples; there are a lot of examples of covert channels.
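And a covert timing channel can be sketched the same way: the sender modulates how long it holds a resource, and the receiver decodes bits from the timing. The thresholds below are arbitrary, and the receiver times the sender directly rather than watching CPU or disk activity as a real channel would.

```python
import time

def send_bit(bit):
    # The sender holds the "resource" longer to signal a 1, briefly for a 0.
    time.sleep(0.2 if bit else 0.05)

decoded = []
for bit in [1, 0, 1]:
    start = time.monotonic()
    send_bit(bit)                          # sender modulates timing
    elapsed = time.monotonic() - start     # receiver observes the timing
    decoded.append(1 if elapsed > 0.1 else 0)
print("decoded bits:", decoded)            # -> [1, 0, 1]
```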
This domain also covers how software is supposed to be developed from the ground up. You start with models, depending on the level of protection you're trying to provide; then you go into the proper design phase, the specification phase, and the programming phase, how to secure a program and how to test for a lot of the issues that are in our software today; then the product needs to go through evaluation.
Another thing the CISSP exam will cover is cell phone cloning. It doesn't necessarily fall into this domain; it depends on the resource, where this topic is covered. But I'll quickly go over it: cell phone cloning has been happening for a long time, because you have two numbers on your cell phone. There's the ESN, which is like your serial number and which is against the law to actually change, and there's the MIN, the phone number that's assigned to your phone. Now, most people don't realize these numbers go in clear text to a base station when you need to make a call, and so a lot of people have been able to sniff these numbers, the ESN and MIN. Somebody can steal your numbers, which are valid numbers, and go reprogram them into another phone, and that's what cloning is. It's not even just when you're making calls: any time your cell phone is on, it's communicating with a base station, so it's always sending this data.
Now, tumbling is different. When you're away from home and you're going to make a call, the ESN and MIN numbers have to get sent back to your home station to see if they're valid before a call is allowed, but it takes an unacceptable amount of time for that whole process to go from base station to base station to your home location just to allow you to make a call. So telecommunications companies will allow that first call to go through even if these numbers aren't valid; then, for the second call, if they're invalid, you can't make that second call. What tumbling is, is changing the ESN and MIN numbers for each call, so each time, that "first call" is always going to be allowed to go through. Now, tumbling isn't as effective as it used to be, mainly because telecommunications companies and providers are able to do that authentication piece much quicker than before.
So again, the crux of this domain is really understanding a computer system from the bottom up: looking at the CPU components; the different buses, which we didn't talk about; memory addresses; the different types of memory; then getting into the different types of languages, compiled versus interpreted; then looking at the security components within an operating system, what falls within that TCB, and the protection mechanisms. It also looks at a lot of different models, and the models really tell you how to develop a product to provide the certain level of protection you're going after. So it goes through the life cycle of the development of a product and into the evaluation process. Which criteria is used depends, but today it's mostly the Common Criteria.
So you need to understand the components of the criteria, what these different ratings mean, the certification and accreditation process, and a lot of the different attacks that can take place against the components that have been developed specifically to protect the system overall. This is a very good domain for understanding, kind of from the inside out, how software and operating systems are built to protect you, your applications, and your data within them.
Host: Thank you, Shon. This concludes Class Four of CISSP Essentials: Mastering the Common Body of Knowledge, on security models and architecture. Be sure to visit www.searchsecurity.com/cisspessentials for additional class materials based on today's lesson and to register for our next class on telecommunications and networking. Thanks again to our sponsor, and thank you for joining us. Have a great rest of the day.
Read the full transcript from this video below:
CISSP Essentials training: Domain 2, Access Control
Host: Welcome to Search Security CISSP Essentials, Mastering the
Common Body of Knowledge.
This is the second in a series of ten classes, exploring the fundamental concepts, technologies,
and practices of information systems security corresponding to the CISSP's Common Body of
Knowledge.
In our last class, we explored security management practices. Today's class will examine topics
covered in the second domain of the CBK, Access Control. The cornerstone of information security is
controlling how resources are accessed, so they can be protected from unauthorized modification or
disclosure. The controls that enforce access control can be hardware or software tools, which are
technical, physical, or administrative in nature.
In this class, lecturer Shon Harris will cover identification methods and technologies, biometrics,
authentication models and tools, and more. Shon Harris is a CISSP, MCSE, and president of Logical
Security, a firm specializing in security education and training. Logical Security provides
training for corporations, individuals, government agencies, and many organizations. You can visit
Logical Security at www.logicalsecurity.com.
Shon is also a security consultant, a former engineer in the Air Force's Information Warfare unit,
and an established author. She has authored two best-selling CISSP books, including CISSP
All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is
currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook. Thank you for
joining us today, Shon.
Shon Harris: Thank you for having me.
Host: Before we get started, I'd like to point out several resources that supplement this
presentation. On your screen, the first link points to a library of our CISSP Essentials classes,
where you can attend previous classes, and register to attend future classes as they become
available. The second link on your screen allows you to test what you've learned with a helpful
practice quiz on today's material. And finally, you'll find a link to the Class 2 Spotlight, more
detailed information on this domain. And now, we're ready to get started. It's all yours,
Shon.
Shon Harris: Thank you. Thank you for joining us today; we are going to go over the Access Control domain. This is a very large domain for the CISSP exam. Students don't usually find it as difficult as the other domains, but it does have a lot of material in it. In this domain we talk about different access control types, technologies and methods for authentication and authorization; we'll look quickly at some of the models that are integrated into applications and operating systems that control access, and how subjects and objects communicate within the software itself. We also need to understand how to properly administer access to the company's assets. And then we'll quickly look at intrusion detection systems.
Now, in the last class, I talked about different types of controls, and I said that there's a theme throughout all of the domains in the Common Body of Knowledge: it's important for you to not only understand the different types of controls, physical, technical and administrative, but also to know examples of each kind and how they apply within the individual domain. Since we're talking about access control right now, we have listed some of the controls that a company can put in place to control either physical or logical access.
Physical controls, of course, can be that you actually have locks, you have security guards, you have fences, you have sensitive areas that are blocked off and maybe need some type of swipe-card access. Technical controls would be what you would think of: access controls, logical controls that are built into applications and operating systems, biometrics, encryption. And administrative controls also come into play with access control, although a lot of people don't think about it. You need to have a security program that outlines the role of security within your environment, but also who is allowed to access what assets, and the ramifications if these expectations are not met.
Now, even though we have three categories of controls, we also have different characteristics that individual controls can provide. When we say controls, it's the same thing as a countermeasure or a safeguard: a mechanism that is put in place to provide some type of security service. These different controls can provide different types of services.
A control can be preventative, meaning that you're trying to make sure something bad does not take place. Preventative controls could be developing and implementing a security program, or encrypting data; you encrypt data to try to prevent anybody from reviewing your confidential information.
Something that provides a detective service is something that you would look at after something bad happened: maybe a system goes down, and you're going to look at the logs to try to piece back together what took place and figure out how to fix the problem. Intrusion detection systems are detective controls, because they're looking after the fact, maybe after an attack took place.
Corrective means that something bad has already happened, and you have controls that can fix the problem and get the environment or the computer back to a working state. An example of a corrective control: you have antivirus software, and once a file actually gets infected, your antivirus software, if it's configured to do this, will try to strip the virus out of the infected file; it's going to try to correct the situation.
There are other types of corrective controls within your operating systems and software. These entities will save state information; state information is really how the variables are populated at a certain snapshot in time. Applications and operating systems will save state information so that if there's some type of glitch, maybe a power glitch or an operating system glitch, they can try to correct the situation, bring you back to that state, and save your data.
Now, deterrent. Some people have a problem, or don't really understand the difference between
deterrent and preventative. Deterrent is that you're trying to tell the bad guy, "we're protecting
ourselves, we're serious about security, so you need to move on to an easier target." Preventative
means you're trying to prevent something bad from taking place. Deterrent means that you have
something that's actually visible in some way, to tell the possible intruder that they're going to
have to go through some work to actually carry out the damage they want to. And an example is when
people actually put "Beware of Dog" signs up. Some people may not even have a dog, but they're just
trying to tell the bad guy, "go away, because we have some type of security mechanism."
There are also access controls that provide recovery, and there are different kinds of mechanisms that can provide different types of recovery; for example, if your data actually got corrupted, you need to recover the data from some kind of backup. And compensation just means there are alternate controls that provide similar types of service that you can choose from.
Now you need to be familiar with a combination, like administrative detective, you need to
understand physical detective, technical detective, you also need to know administrative
preventative, technical preventative, physical preventative. And what I mean by this is that you
need to understand, if something is detective and administrative, that it's trying to accomplish
certain types of tasks, and you need to know examples of each kind.
Here we have some examples of detective administrative controls. With job rotation, a lot of people don't understand how it's detective in nature, but it is, and it's a very powerful control that companies can put in place. The security purpose of using job rotation is that if you have one person in a position, and they are the only one who carries out that job, they're the only one who really knows how that job is supposed to be carried out and what they're doing; they could be carrying out fraudulent activity and nobody else would know. So the company should rotate individuals in and out of different positions, to possibly uncover fraudulent activity taking place. And this is definitely a control that's used in financial institutions.
Now, this is different than separation of duties. Separation of duties would be an administrative preventative control. Separation of duties means that you want to make sure that one entity cannot carry out a critical task by itself, so you split that task up so that two entities each have to carry out their piece before the task can be completed, and that's preventative because you're trying to prevent fraudulent activity from taking place. Job rotation is detective, because you rotate somebody into a new position and they may uncover some of the fraudulent activity that could've been happening.
So we have examples of detective technical controls (again, those work after the fact: intrusion detection systems, reviewing logs, forensics) and detective physical controls, physical controls that can be used to understand what took place, so that you can go after the bad guy, try to get the environment back up to a working state, or start collecting evidence for prosecution.
Now, there are different authentication mechanisms that we use today, and they all have one or more of these characteristics: something that you know, something that you have, or something that you are. Something that you know would be a PIN, a password, a passphrase. Something that you have would be a memory card, a smart card, a swipe card. And something that you are is a physical attribute, so that one is talking about biometric systems, and we're going to look at different types of biometric systems in a minute.
Now, most of our authentication systems today just use one of these, combined with a username or a user account number, and that's considered one-factor. Two-factor means that two steps are necessary to actually carry out the authentication piece. Two-factor authentication provides more protection; it's also referred to as strong authentication. So there are different types of mechanisms to authenticate individuals or subjects before they can actually access the objects within your environment. These are the ones that you need to know for the CISSP exam.
Biometrics is a type of technology that reviews some type of physical attribute, and with biometrics, what happens is that an account is set up for an individual and there's an enrollment period to go through. So for example, let's say the administrator sets up an account for me, and the biometric system that we're using is fingerprint-based, so I will put my finger into a reader. This reader is going to read specific vectors on my finger and extract that information. That information from my finger is held in a profile, or reference file, and then it's put into a backend database, just as our passwords and such would be kept in a backend database. When I need to authenticate, I will put in some type of username or user account number, and then I will put my finger into the reader. The reader will read the same vectors on my finger and compare them to the profile that is kept in the database. If there's a match, then I'm authenticated.
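That enrollment-then-verification flow can be sketched as follows; the "feature vectors" and tolerance are invented, since real biometric matching is far more involved.

```python
reference_db = {}   # stands in for the backend database of profiles

def enroll(user, features):
    reference_db[user] = features                  # the enrollment period

def verify(user, features, tolerance=2):
    profile = reference_db.get(user)
    if profile is None:                            # never enrolled
        return False
    # Count mismatched feature points; accept if within the calibrated tolerance.
    mismatches = sum(1 for a, b in zip(profile, features) if a != b)
    return mismatches <= tolerance

enroll("shon", [4, 7, 1, 9, 3, 6])                 # vectors extracted at enrollment
print(verify("shon", [4, 7, 1, 9, 3, 6]))          # exact match: True
print(verify("shon", [4, 7, 2, 9, 3, 5]))          # within tolerance: True
print(verify("bob",  [1, 1, 1, 1, 1, 1]))          # no profile: False
```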
Token devices: there are several different types of token devices, and for the CISSP exam, you need
to understand the difference between synchronous and asynchronous token devices. These devices
actually create one-time passwords, and a one-time password provides more protection than a static
password, because you only use it once. If the bad guy sniffs your traffic and uncovers that
password, it's only good for a short window of time. So with token devices, synchronous means time-
or event-based, and asynchronous means based on challenge-response. And for the exam, you need to
understand the differences between them, how they work, and the security aspects of both of
them.
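As an illustration of the time-synchronous variety, here's a minimal sketch along the lines of what was later standardized as TOTP (RFC 6238): the token and the authentication server each derive the same short-lived code from a shared secret and the current time window. Actual vendor algorithms vary; the secret below is made up.

```python
import hashlib, hmac, struct, time

def one_time_password(secret: bytes, interval: int = 30) -> str:
    """Derive a six-digit code from a shared secret and the current
    30-second time window; token and server compute the same value."""
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in HOTP
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

print(one_time_password(b"shared-secret"))  # valid only for this time window
```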
The other authentication mechanisms are memory cards and smart cards, and cryptographic keys.
Cryptographic keys means that you're actually using your private key to prove you are who you say
you are. It's not a public key, and when you study cryptography you come to truly understand the
difference between a private key and a public key, and the different security characteristics each
of them provides. But today we use cryptography for a lot of different reasons; if you're using
cryptography for authentication, you're proving possession of your private key. Now, within
biometrics, we have different types of errors that we need to be aware of.
A type one error means that somebody who should be authenticated, and is authorized to access
assets within the environment, gets rejected. So if we experience a type one error, that means that
someone who should've been authenticated was not. Take the scenario I went through: I went through
an enrollment period, the system has a profile on me, and I've gone through the steps of being
authenticated, but it shuts me out, it doesn't let me in. And this can happen with biometrics. If
it's a fingerprint reader, of course, it can happen if you have a cut or you have dirt on your
finger; with a voiceprint, if you have a cold, or if there's some type of problem with your voice.
So because biometrics looks at such sensitive physical information, there is a chance for more type
one errors.
Now, type two errors are actually more dangerous than type one errors: a type two error means
you're allowing in the impostor, you're allowing somebody to authenticate who should not be able to
authenticate. So let's say Bob has never gone through an enrollment period, and should not be
allowed to access our company assets, but he goes through the process of authentication and the
biometric system lets him in; that's a type two error.
Now there's a metric, the Crossover Error Rate, and the CER value is used to determine the accuracy
of different biometric systems. The definition of the CER value is the point at which type one
errors equal type two errors; that means you have just as many type one errors as you have type two
errors. And the reason that we use this metric is because when you get a biometric system, you
calibrate it to meet your needs and your environment. The more sensitive you make your biometric
system, the more you will reduce type two errors, so you're keeping the bad guy out, but you're
going to have an increase in type one errors, meaning that the people who are supposed to be
authenticated are going to be kept out.
So companies have to strike a balance between type one errors and type two errors, because if you
have too many type one errors, people who are supposed to be authenticated are not getting in. So
you will calibrate this device to meet the necessary sensitivity level for your environment. And
let's say you and I are customers, and we're looking at different biometric devices; how do we know
that one is more accurate than the other? We don't know that, we as customers don't know that. We
can go by the vendor's hype on how great their product is, but what's better is to have biometric
systems that are tested by a third party, which comes up with an actual rating, the CER rating. So
a CER rating indicates how accurate the biometric system is: the lower the CER rating, the more
accurate the biometric system.
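Here's a small sketch of how that calibration trade-off plays out, using made-up error rates for a hypothetical device: as sensitivity rises, type one errors (false rejections) climb while type two errors (false acceptances) fall, and the CER is the rate at the setting where the two curves cross.

```python
# Hypothetical calibration data for one biometric device.
settings = [1, 2, 3, 4, 5]                 # increasing sensitivity
type_one = [0.02, 0.05, 0.10, 0.18, 0.30]  # false rejections rise
type_two = [0.25, 0.15, 0.10, 0.04, 0.01]  # false acceptances fall

# The CER is the error rate where type one and type two errors are equal.
setting, t1, t2 = min(zip(settings, type_one, type_two),
                      key=lambda row: abs(row[1] - row[2]))
print(f"CER is about {t1:.0%} at sensitivity setting {setting}")  # 10% at setting 3
```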
Now, there are several different types of biometric systems used today, and some of them, of
course, we use more often than others. Biometrics has been around for a long time, but it really
wasn't that popular until after 9/11, mainly because society has pushed back on biometrics: it
seems to get too much into our space, it's too intrusive, and we're used to just having to provide
a PIN or a password.
So for the CISSP exam, you need to know the differences between all of the biometric system types.
For example, a retina scan will look at the blood vessel patterns behind the eye, and an iris scan
will look at the color patterns around the pupil. Signature dynamics is something different from
you just signing a check: signature dynamics collects a lot of dynamic information on how you sign
your name, how you hold the pen, the pressure that you use, how fast you sign your name, and the
static result, your actual signature. So it's easy to forge somebody's signature, but it's very
difficult to write their name exactly as they write it.
You also need to know the difference between hand topology and hand geometry. Hand topology is a
side view of the hand, whereas geometry is the top view of the hand. So with topology, if you had a
topology reader, you would put your hand in, and there's a camera or a reader off to the side that
will look at the thickness of your hand, your knuckles and such, while a geometry reader has the
camera on top, looking at the whole view of your hand.
Now, I said that memory cards and smart cards are used for authentication. Memory cards are very
simplistic: a memory card just has a strip on the back that holds data, and that's all it does, it
just holds data. A memory card doesn't have any intelligence; it can't do anything with the data. A
smart card is different: it actually has a microprocessor and integrated circuits, and you have to
authenticate to your smart card to actually unlock it, because your smart card is like a little
tiny computer, it can actually do stuff, it can process data. And when you look at a smart card,
you see that there's a gold seal; that gold seal holds the input/output contacts through which your
card communicates with the reader.
So you put your card into a reader, and not only is it going to communicate with the reader through
that gold seal, the input/output contacts, it's also going to get its power from the reader. So you
put your card in, you enter your PIN, and if you put it in correctly, you unlock the smart card.
And the smart card can do a lot of things, depending on how it's coded. It could hold your private
key if you're working within a PKI and need to authenticate, it could respond to a challenge, it
could create a one-time password, and it could hold a lot of your work history information.
And smart cards are really catching on more today. They've been popular in many countries outside
of the United States, and the United States is now catching up: the military has changed all of its
IDs over to smart cards, and a lot of credit cards are using smart card technology. They provide
more protection because you have to authenticate to them, and it's also harder to compromise a
smart card.
In fact, with some smart cards, if I try to authenticate and put my PIN in incorrectly, let's say,
four times, the card can actually lock itself, and I have to call the vendor to get an access code
to override that. Some smart cards will actually physically burn out their own integrated circuits:
if I attempt something like a brute force attack, trying to authenticate over and over, then after
I reach a certain threshold, the card will just go ahead and burn its own circuits, so that the
physical card cannot be used anymore.
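A toy model of that retry-counter behavior might look like the sketch below; the class name and the limit of four are made up for illustration, and a real card enforces this inside tamper-resistant hardware rather than in application software.

```python
class SmartCard:
    """Toy PIN retry counter: lock the card after too many bad attempts."""

    def __init__(self, pin: str, max_attempts: int = 4):
        self._pin = pin
        self._max_attempts = max_attempts
        self._attempts_left = max_attempts
        self.locked = False

    def authenticate(self, pin: str) -> bool:
        if self.locked:
            return False
        if pin == self._pin:
            self._attempts_left = self._max_attempts  # success resets the counter
            return True
        self._attempts_left -= 1
        if self._attempts_left == 0:
            self.locked = True  # some cards even burn their own circuits here
        return False

card = SmartCard("1234")
for guess in ("0000", "1111", "2222", "3333"):
    card.authenticate(guess)
print(card.locked)  # True: four bad PINs locked the card
```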
Now, in this domain, we also look at single sign-on technologies. The goal of single sign-on is
that users only have to type in one credential set, and they'll then be able to access all of the
resources within the environment they need to carry out their tasks. This is kind of a big thing in
some markets today; there are a lot of companies trying to accomplish single sign-on through their
products. And that helps the user, because today, in most environments, users have to have several
different usernames and passwords, or other credential sets, to be able to access different types
of servers and different types of resources.
That also adds to the burden of administration, when we've got all of these different types of
systems and have to keep up all of these various user credential sets. So with single sign-on, you
just log in once, and you access the resources you need; but it's kind of a utopia that a lot of
companies are chasing right now. The difficulty is that you're trying to get a lot of diverse
technologies to understand and treat one credential set the same way: say you have five different
flavors of UNIX in your environment, different Windows versions, and legacy systems; it's not easy
to accomplish. So even though there are a lot of different technologies for single sign-on, these
are the ones that will be covered in the CISSP exam that you need to know about.
Now, Kerberos is an authentication protocol that's been around for quite some time. It's been
integrated into and used in UNIX environments, and it's catching on more and more today, mainly
because Microsoft has integrated it into its Windows 2000 family. In fact, Windows 2000 will try to
authenticate you through Kerberos first, and if your system is not configured to be able to do
that, it'll drop down to another authentication method. Now, Kerberos is based on symmetric key
cryptography, which is important. And there are different components within Kerberos: we have the
KDC, the Key Distribution Center, and principals and realms.
Now, a quick overview: within an environment, a network that is using Kerberos, users and services
cannot communicate directly. I cannot communicate directly with a print server; I have to go
through steps to properly authenticate myself. Remote services cannot communicate directly with
each other either; they have to be authenticated. So the KDC holds all of the symmetric keys for
all of the principals. And just as with other technologies, each technology can come up with its
own terms.
If you're not familiar with principals or realms: a KDC is responsible for all the services and
users within an environment. All those services and users are referred to as principals, and the
environment is referred to as a realm. If you're familiar with the term domain within Microsoft,
where a domain controller is responsible for one domain, it's the same type of thing: a KDC is
responsible for one realm, and a realm is made up of principals.
Now, the KDC is made up of two main services, the authentication service and the ticket granting
service, and I'll just quickly walk through the steps that happen within Kerberos. Let's say I come
in to work, and I need to authenticate. I'm going to send my username over to the KDC, actually to
the authentication service on the KDC. I send over my username, but I do not send over my password,
and that's a good thing, because since the password is not going over the network, somebody can't
grab it and try to use it. So I send over my username, the KDC looks up to see if it knows Shon
Harris, and it does, so it's going to send over an initial ticket that's encrypted. The initial
ticket gets to my computer, and my password is converted into a secret key.
Now that key, the secret key derived from my password, is used to decrypt the initial ticket. If
I've typed in the correct password, I can decrypt the initial ticket, and if that takes place
properly, I am now authenticated to my local system. If I need to communicate with something
outside of myself, say a file server, I have to send that initial ticket, the ticket granting
ticket, over to the ticket granting service. And basically that ticket says to the ticket granting
service, "Look, I've authenticated, I need to talk to the file server, create another ticket for
me." The ticket granting service will create a second ticket, a service ticket, and it'll carry two
copies of a session key. It comes over to me, I pull out one copy, I send the other over to the
file server, and we both have the session key now. That was just a really quick overview; these
steps actually get a little bit deeper, a little bit more complex, and you need to understand them
for the exam, along with some of the downfalls of Kerberos itself.
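Here's a deliberately simplified simulation of that flow, using XOR as a stand-in for a real symmetric cipher so the example stays self-contained; the principal names and passwords are invented, and real Kerberos adds timestamps, authenticators, and mutual authentication on top of this basic idea.

```python
import hashlib, os

def derive_key(password: str) -> bytes:
    """Convert a password into a symmetric key, as the Kerberos client does."""
    return hashlib.sha256(password.encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'encryption': applying it twice recovers the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The KDC holds a symmetric key for every principal in the realm.
kdc_keys = {"shon": derive_key("correct horse"),
            "fileserver": derive_key("fs-secret")}

# 1. I send only my username; the KDC replies with an initial ticket
#    holding a session key, encrypted under my password-derived key.
session_key = os.urandom(16)
initial_ticket = xor_cipher(kdc_keys["shon"], session_key)

# 2. Only someone who knows my password can decrypt that ticket.
assert xor_cipher(derive_key("correct horse"), initial_ticket) == session_key

# 3. The ticket granting service wraps the same session key under the
#    file server's key, so the server and I end up sharing it.
service_ticket = xor_cipher(kdc_keys["fileserver"], session_key)
assert xor_cipher(kdc_keys["fileserver"], service_ticket) == session_key
print("user and file server now share a session key; no password crossed the wire")
```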
Now, a large portion of this domain goes into different models, different access control models.
And these models, most people I find aren't really familiar with where they come into play: they
are the core of an application or an operating system. When a vendor builds an operating system,
they actually have to decide, before they even write a piece of code, what type of access control
is going to be used within their product. Discretionary access control means that data owners can
choose who can access their files and their data. And this is the model and environment that we're
most used to, because Windows, Macintosh, and most flavors of UNIX and Linux work on DAC. It just
means that you can determine who can access which files and directories within your system.
And you know you're working on a DAC system if, in Windows, you can do a right-click, look at the
security properties, see who has read access and who has full control, and choose who gets those
levels of access. In a Linux environment, if you can use the chmod command and change the actual
permission attributes on files, you're working in a DAC model, because it allows you to make those
decisions. Now, that's different from a MAC model. With Mandatory Access Control, the system makes
the decisions; the data owners and the users do not make the decisions.
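As a small illustration of exercising DAC as a data owner on a Unix-like system, here's a sketch using Python's standard library; the file name is made up, and the permission bits correspond to chmod 644.

```python
import os, stat

# Create a file I own, then exercise discretionary control over who reads it.
with open("report.txt", "w") as f:
    f.write("quarterly numbers\n")

# Owner gets read/write; group and others get read-only (chmod 644).
os.chmod("report.txt", stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
```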
In the MAC model, the operating system makes decisions based on the clearance of the user and the
classification of the object. So let's say I have top-secret clearance and I'm trying to access a
file that is classified secret: my clearance dominates the classification, and the operating system
will allow me to access that file. So Mandatory Access Control is much more strict; it doesn't
allow users to make any decisions or configuration changes. And MAC systems are used more in
government and military environments, where secrets really have to be kept secret.
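That dominance check can be sketched in a few lines; the level names, values, and function name below are just illustrations of the comparison a MAC system performs on every access.

```python
# A MAC read decision: allow access only when the subject's clearance
# dominates (is at least as high as) the object's classification.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mac_allows_read(clearance: str, classification: str) -> bool:
    return LEVELS[clearance] >= LEVELS[classification]

print(mac_allows_read("top secret", "secret"))  # True: clearance dominates
print(mac_allows_read("secret", "top secret"))  # False: the system denies access
```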
Role-based access control just means that you set up roles or groups, you assign rights or
permissions to those roles or groups, and then you put users in them, and they inherit those
rights, as the short sketch below shows.
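A minimal sketch of that inheritance, with made-up roles, users, and permission names:

```python
# Permissions attach to roles; users inherit whatever their roles carry.
role_permissions = {"accounting": {"read_ledger", "post_journal"},
                    "helpdesk": {"reset_password"}}
user_roles = {"sally": {"accounting"}, "mark": {"helpdesk"}}

def permissions_for(user: str) -> set:
    roles = user_roles.get(user, set())
    return set().union(*(role_permissions[r] for r in roles)) if roles else set()

print(permissions_for("sally"))  # {'read_ledger', 'post_journal'}
```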
And even though I'm going over these very quickly, for the exam you need to know a lot of this
material in much more depth than I'm covering here: when you use these types of models, in what
environments, the characteristics of the different models, the mechanisms used in the models, the
restrictions, all of that. Most of us are not familiar with Mandatory Access Control systems unless
we work in that type of environment. The most recent operating system that came out that's MAC,
Mandatory Access Control, is SELinux, Security-Enhanced Linux, which was developed by the NSA and
Secure Computing.
Now, we also need to be concerned about how we're controlling access. So far we've looked at some
controls that we could put in place, the different types of controls, and what characteristics
those controls provide: preventative, corrective, deterrent. We looked at authentication mechanisms
that rely on something that you know, something that you have, or something that you are. And we
looked at single sign-on technologies. Now we need to look at how we properly administer access
control, especially remotely. In today's environment, we have a lot of remote users that need to
access our corporate assets, so how do we deal with that?
The three main protocols we need to know about are RADIUS, TACACS, and Diameter. Now, RADIUS has
been around for a long time, and it's an open protocol. Anytime anything is referred to as open, it
means it's available for vendors to adapt the code to work in their products. So different vendors
have taken the RADIUS protocol and modified it enough to work seamlessly in their products.
So we have different flavors of RADIUS. Each one of the protocols that we're talking about is
referred to as a triple-A protocol, which means it carries out authentication, authorization, and
auditing. And auditing also includes something that most people don't think about: auditing is the
way for ISPs to keep track of the amount of bandwidth being used, so they can charge the
corporation properly. So it's not just auditing to keep track of what happened; it's also used in
billing activities.
So in RADIUS, to walk through a quick scenario, let's say I want to access the Internet. I will
connect to my ISP, and my ISP will go through a handshake of how authentication will take place,
but what I'm communicating with is an access server. This access server is actually the RADIUS
client. I am not a RADIUS client; the access server is the RADIUS client. The RADIUS client
communicates with a RADIUS server, and the RADIUS server is the component that has all of the
usernames and passwords and their different connection parameters.
So I will send my credentials over to the RADIUS client, which is the access server. The RADIUS
client will send that information over to the RADIUS server, the RADIUS server will determine
whether I've entered the right credentials, and it will send an accept or reject back to the RADIUS
client. And the RADIUS server doesn't just send an accept or reject; it can also indicate the type
of connection that needs to be set up, how much bandwidth I'm allowed, whether I need a VPN set up,
and so on. And that's just to get on the Internet; a lot of times, when you need to communicate
with your corporate environment, you're going to have to go through a second round of
authentication, and a lot of companies use RADIUS in that mode as well.
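Sketched in code, the division of labor looks roughly like this; the user database and parameters are invented, and real RADIUS is a binary UDP protocol protected by a shared secret between client and server.

```python
# The end user talks to the access server; the access server, acting as the
# RADIUS client, forwards credentials to the RADIUS server for a decision.
USER_DB = {"shon": {"password": "pw123", "bandwidth_kbps": 512, "vpn": False}}

def radius_server(username: str, password: str) -> dict:
    entry = USER_DB.get(username)
    if entry and entry["password"] == password:
        return {"status": "Access-Accept",
                "bandwidth_kbps": entry["bandwidth_kbps"], "vpn": entry["vpn"]}
    return {"status": "Access-Reject"}

def access_server(username: str, password: str) -> dict:
    """The RADIUS client: relays credentials and enforces the reply."""
    return radius_server(username, password)

print(access_server("shon", "pw123"))   # accept, plus connection parameters
print(access_server("shon", "wrong"))   # reject
```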
Now, TACACS is not an open protocol; TACACS is a proprietary protocol, developed and owned by
Cisco. So they're not going to let you have the code for free and manipulate it as you see fit; it
works with their products. Now, TACACS has gone through generations: TACACS, extended TACACS, and
now we're at TACACS+. Even though TACACS and RADIUS basically do the same thing, there are some
differences. The authentication, authorization, and auditing pieces of TACACS+ are separate; in
RADIUS, they're all clumped together, you get them all, you don't have a choice.
With TACACS+, the administrator who is configuring and setting this up can choose which services he
or she actually wants. TACACS+ also allows the administrator to come up with more detailed,
user-oriented profiles than RADIUS, meaning that if you have Sally authenticating remotely into the
corporate network, she can have her own profile on what she can and can't access, which can be
different from Mark's. That's something different. Also, TACACS+ communicates over TCP, which is a
reliable transport protocol, whereas RADIUS goes over UDP. And RADIUS does not protect the
communication from the RADIUS client to the RADIUS server as well as TACACS+ does: between the
RADIUS client and the RADIUS server, just the user password is encrypted, while with TACACS+, all
of the communication going back and forth between client and server is encrypted.
Now, Diameter is a newer protocol that a lot of people don't know about. The goal is that it's
supposed to eventually replace RADIUS. And the reason Diameter was created is that we have a lot of
devices today that need to be able to authenticate differently than the traditional methods allow.
Traditional remote authentication at one time happened over SLIP connections; now it happens over
PPP connections, and it uses traditional ways of authenticating: PAP, CHAP, or EAP. But we have
wireless devices, we have smartphones, we have a lot of devices that can't, or don't, communicate
over these types of connections, that don't have the resources for a full TCP/IP stack, and that
may need to authenticate in different ways.
So Diameter allows for different types of authentication; it really opens the door to the types of
devices that can authenticate to your environment, and how that authentication can take place. Now,
I didn't come up with this; whoever created Diameter said that "Diameter is twice the RADIUS." If
you get it, radius and diameter: a circle's diameter is twice its radius. So I guess if you come up
with a new protocol, you can come up with your own goofy name.
Now, in this domain we also go through intrusion detection systems. The two basic approaches to IDS
are network-based and host-based. Network-based means that you have sensors in different locations
within your environment, and you have to be aware that wherever the traffic is that you're trying
to monitor, that's where you need a sensor. A sensor is either a dedicated device or a computer
running IDS software with its network interface card put in promiscuous mode. Promiscuous mode just
means it can look at all of the traffic going back and forth.
So another thing is, where do you actually place that sensor? Do you want it in your DMZ? A lot of
companies put an IDS sensor in their DMZ. Companies have to make a decision about whether they're
going to put a sensor in front of their firewall, facing an untrusted environment. If you put a
sensor outside of your environment, in front of your firewall, you're going to see an amazing
amount of traffic, so a lot of companies don't do that, because there's so much junk, so much
traffic. Some companies that require a higher level of protection will put a sensor outside the
firewall to find out who's knocking, and to start gathering statistics to try to predict the types
of attacks that may take place. Because there are certain things, certain pings, sweeps, probes,
and activities, from which you can determine, "Okay, this is what the person's after, this is the
type of attack we need to be prepared for."
So network-based is different from host-based. Host-based just means that you've got some type of
software on an individual system, and that system is its world: it doesn't understand network
traffic, it doesn't care about network traffic, it just cares about what's going on within that
single computer. A host-based IDS would be looking at user activity, what's trying to access system
and configuration files, what types of access are coming in from the outside, and trying to
determine any type of malicious activity.
We can also split IDSes up into signature-based and behavioral-based. Now, signature-based is
basically the same type of idea as antivirus software. Antivirus software has signatures, which are
just patterns it tries to map to an actual virus. In an IDS, we have signatures, which are patterns
that the IDS uses to determine whether a certain attack is going on. For example, there's an attack
called the LAND attack. In the LAND attack, the source and destination IP address and port are the
same, whereas in a normal packet, the source and destination IP addresses are different: if I send
you a packet, my IP address is the source and your IP address is the destination. Well, if I'm a
bad guy, I can manipulate that header and set both the source and destination addresses to yours,
and if your system is vulnerable to a LAND attack, that just means your TCP/IP stack has no idea
how to interpret that. That's an example of a signature: if an IDS actually has that signature, and
it finds a packet that has the same source and destination address, it will then alert that this
type of attack is going on.
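A signature check like that one can be expressed as a simple predicate over parsed header fields; the dictionary representation of a packet here is purely for illustration, not how a production IDS parses traffic.

```python
# LAND attack signature: source and destination IP and port are identical.
def matches_land_signature(packet: dict) -> bool:
    return (packet["src_ip"] == packet["dst_ip"]
            and packet["src_port"] == packet["dst_port"])

crafted = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.5",
           "src_port": 139, "dst_port": 139}
if matches_land_signature(crafted):
    print("ALERT: LAND attack pattern detected")
```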
There are other types of attacks that can be identified this way: the Xmas attack, in which all the
flags within the headers are turned on, and a lot of different types of fragment attacks, where the
offsets within the headers are incorrect. Now, signature-based is different from behavioral-based.
A behavioral-based system will learn what is considered normal in your environment. And really, how
IDSes started was behavioral-based, behavioral host-based systems. They were developed and used
within the military; the military was not so much concerned about outside attacks as about users
misusing its resources. But we've since extended behavioral-based detection to look for attacks in
network-based systems.
So with behavioral-based, what happens is you install the IDS in your environment, and it goes
through a learning mode: it learns what the normal activities are for your environment, the user
activities, the types of protocols that are used, how much UDP traffic is used compared to TCP
traffic. And after this learning mode, which is usually a couple of weeks, you have a profile. The
rest of your traffic is then compared to this profile, and anything that does not match the profile
is considered an attack. Now, one benefit of behavioral-based is that it can detect new attacks,
because a new attack is just something out of the norm. So behavioral-based can detect new attacks,
whereas signature-based can only detect attacks that have already been identified and had
signatures written for them.
Now, the two types of behavioral-based IDS are statistical and anomaly-based. Statistical means
that you have certain thresholds set, and you as the network administrator do this configuration.
In our example here, you might have ten FTP requests over a ten-minute period, and that's fine,
there could be that many requests within your whole environment, that's not a problem. But if you
have fifty FTP requests within ten minutes, that is out of the norm; that means something is
actually going on, somebody's trying to break into some of the FTP servers that you have. And
that's just one example; you'd have several different statistical thresholds to be able to
determine whether an attack's going on.
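A sliding-window version of that FTP threshold might be sketched as below; the class name, the ten-minute window, and the threshold of fifty are made-up values standing in for whatever the administrator configures.

```python
import time
from collections import deque

class FtpThresholdMonitor:
    """Statistical IDS sketch: alert when FTP requests in a sliding
    ten-minute window exceed an administrator-set threshold."""

    def __init__(self, threshold: int = 50, window_seconds: int = 600):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()

    def record_request(self, now: float) -> bool:
        self.events.append(now)
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold  # True means raise an alert

monitor = FtpThresholdMonitor()
start = time.time()
alerts = [monitor.record_request(start + i) for i in range(60)]
print(any(alerts))  # True: sixty requests in one minute trips the threshold
```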
And anomaly-based just means, again, that something does not match the historical profile that was
built while your IDS learned what's normal; if something is out of the norm, it's considered an
anomaly. Now, what's important is that when you put your IDS in this learning mode, you make sure
that your environment is pristine, because companies have put their IDS in learning mode while they
were under attack. If that happens, the IDS thinks the attack traffic is normal, and it will
incorporate that into the profile.
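For instance, a crude anomaly check over a learned protocol mix could look like this; the baseline proportions and tolerance are invented for the example.

```python
# Profile learned during the (hopefully pristine) training period.
baseline = {"tcp": 0.70, "udp": 0.25, "icmp": 0.05}

def is_anomalous(observed: dict, tolerance: float = 0.15) -> bool:
    """Flag traffic whose protocol mix strays too far from the profile."""
    return any(abs(observed.get(proto, 0.0) - share) > tolerance
               for proto, share in baseline.items())

print(is_anomalous({"tcp": 0.68, "udp": 0.27, "icmp": 0.05}))  # False: normal day
print(is_anomalous({"tcp": 0.30, "udp": 0.65, "icmp": 0.05}))  # True: out of norm
```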
Now, this has been pretty quick; I'm going through a very large domain, and there are a lot of
things that I didn't cover that are covered in the Access Control domain. But the crux of this
domain is understanding the differences between identification, authentication, and authorization,
the technologies and methods for each one of those, how they interact, and the mechanisms that can
provide them; mainly authentication is looked at. And we looked at cryptographic keys, memory
cards, and smart cards.
We didn't go into passwords and passphrases, or virtual passwords, but those are also components
you use for authentication. For single sign-on, you actually need to know specific characteristics
about SESAME and thin clients, and a lot more about Kerberos than I covered. The different models
are hit pretty hard: DAC, MAC, role-based, rule-based. And again, intrusion detection systems are
part of this domain, and we also look at the different types of attacks that can happen against the
different methods and technologies that are addressed here. So, this was just a quick one-hour look
at this domain; it's a very large domain, and it's very important for corporations to understand
how to properly control access to the assets they need to protect.
Host: Thank you, Shon. This concludes class two of CISSP Essentials, Mastering the Common Body of
Knowledge: Access Control. Be sure to visit www.searchsecurity.com/CISSPEssentials, for additional
class materials based on today's lesson, and to register for our next class, on cryptography.
Thanks again to our sponsor, and thank you for joining us, have a great rest of the day.
This was first published in September 2008