You have to learn to crawl before you can walk. Learned early in life, this lesson is also applicable to CISSP candidates taking their first tentative steps into cybersecurity engineering. To help you get your bearings and prepare for Domain 3 of the CISSP exam, expert Adam Gordon, lead editor of the Official (ISC)² Guide to the CBK, Fourth Edition, provides in this video a high-level look at how various information systems interface with security. While security engineering jobs typically don’t require infosec pros to know each branch of IT inside and out, Gordon explains that it is necessary to have a broad, working understanding of different systems and how they interact in order to protect them.
CISSP® is a registered mark of (ISC)².
The following video is an excerpt from the Official (ISC)² CISSP OnDemand Training.
Transcript - Cybersecurity engineering: CISSP demands broad IT knowledge
Let's continue our conversations now. We'll focus on understanding the security capabilities of information systems. The topics that will be discussed in this area, you can see them on the screen in front of you: access control mechanisms; secure memory management; processor states and process isolation; data hiding, abstraction and cryptographic protections; host firewalls and intrusion prevention systems; auditing and monitoring controls; and, of course, a little dabbling in virtualization. What topic area would be complete without virtualization?
Access control mechanisms
So access control mechanisms, what are they? Obviously, all systems need to have some form of access control: the ability to regulate who gets to see what. What we're talking about is which subjects get to interact with which objects, under what conditions, through what channels, and with what particular mechanisms. We can distinguish between subjects and objects, assign identifiers to both so we can regulate the interactions between them, track and authenticate all subjects and their access to objects, and make appropriate decisions concerning access control. This is what access control mechanisms allow us to do.
We often think about the idea of what's known as complete mediation in relation to access control. When no subject can gain access to any object without authorization, we are said to have a complete mediation solution. In other words, any user or process that wants to see any piece of data can only do so by going through an access control mechanism, and the system will record where they're coming from, who they are, what they're up to, what they mean to do, and how they do it. In cybersecurity engineering, that is complete mediation.
The security kernel normally implements this through what's known as the reference monitor. The reference monitor is a logical abstraction implemented by the security kernel, the security function of the operating system, in order to allow for the complete mediation solution to take place. The reference monitor examines any and all attempts by subjects to access objects to determine whether the access should or should not be allowed. The reference monitor is the crucible through which we examine all interactions between subjects and objects. It is created logically through the security kernel inside of the operating system.
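To make the idea concrete, here is a minimal sketch of a reference monitor in Python. The class name, policy format, and subject/object names are all invented for illustration; a real reference monitor lives inside the security kernel, not in application code. The key property it models is complete mediation: every access attempt is checked against policy and recorded, whether it is allowed or denied.

```python
class ReferenceMonitor:
    """Toy reference monitor: mediates every subject -> object access attempt."""

    def __init__(self, policy):
        # policy maps (subject, object) -> set of allowed actions
        self.policy = policy
        self.audit_log = []  # complete mediation: every decision is recorded

    def check_access(self, subject, obj, action):
        allowed = action in self.policy.get((subject, obj), set())
        # Log every attempt, allowed or denied, before returning the decision.
        self.audit_log.append((subject, obj, action, allowed))
        return allowed

# Illustrative access matrix: which subjects may do what to which objects.
policy = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}
rm = ReferenceMonitor(policy)
print(rm.check_access("alice", "payroll.db", "read"))   # True
print(rm.check_access("alice", "payroll.db", "write"))  # False: not in policy
```

Note that the default for an unknown subject/object pair is denial: anything not explicitly granted is refused, which mirrors how a complete mediation solution is supposed to behave.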
Secure memory management, processor states and process isolation
Secure memory management, as we've talked about, is going to be very, very important in any computing system: the idea of being able to segment areas of memory and assign them to an individual process, allowing that process restricted access to them but others not so much. Being able to page and being able to protect the memory area is very important. And secure memory management reminds us of the thought process we've already spoken about. We've talked about processors and the ways in which processors work: the four phases, the four activities a processor engages in, which are fetching, decoding, executing, and storing. Processor states provide one of the very first layers of defense with regard to cybersecurity engineering, in our defense-in-depth and architectural models.
Around system defense, we have specialized security functions, like the cryptographic coprocessors that we mentioned in one of our prior conversations in this domain, that help us to focus security functions in the processor and isolate them in certain areas. We have states that can distinguish between more or less privileged instructions, and we should then allow access to those isolated functions only if certain states or privileges are being used properly. We have at least two states, which I've mentioned already in a couple of our conversational areas: what are known as supervisor and problem.
The supervisor state is commonly referred to as kernel mode. The problem state is commonly referred to as user mode. You can reflect on the irony of problem mode being user mode if you would like; most people do. But the idea is very straightforward, as I've mentioned: kernel mode, or supervisor mode, has a much less restricted view of the world and much more ability to interact with the processor at a deeper level, with almost any and all functions being made available to it, whereas problem, or user, mode is a restricted view of the world. It is where the user and the software installed in the operating system reside. It is where they interact. They will effectively push their requests down into supervisor mode to be executed by the kernel mode operators, but through a restricted pathway, such that we validate that the request is coming from an authorized process or program with the appropriate ID, or an authorized user with the appropriate credential, whatever that may be, before we execute it.
Layering allows us to separate functional components and ascribe interaction and functionality to those layers, segmenting in such a way that we achieve process isolation and tracking, creating auditability, traceability, transaction integrity, and all the things we talk about to help ensure that sensitive areas of the system are protected from unauthorized access or change. We don't want users directly accessing the CPU. We want them to go through an abstraction layer to get into kernel mode, to help us process-isolate and provide this layering. The HAL, or hardware abstraction layer, is a great example of layering as a technique, a terminology item, and a technology implementation item in modern operating systems. Hardware abstraction layers allow user mode to be separated from the hardware and the kernel mode, and allow the two to interact, but through some sort of layered buffer that controls interaction and only allows it if certain requirements or procedures are followed or met.
Process isolation is used to prevent individual processes from running over, overlapping, and interacting with each other. I want to make sure we're aware of that. We provide, as we've already discussed, distinct address spaces for the execution of the memory requirements of the process. We may space those out. We may randomly lay them out and then map them in some sort of table. We may just go ahead and specify that this process can access this memory here and here, but not over here. There's different ways to do that in cybersecurity engineering, but ultimately we're going to isolate processes executing in memory from one another to ensure integrity and therefore to protect the information that is in that process from other processes modifying or somehow accessing it.
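The distinct-address-space idea above can be observed from ordinary user code. The sketch below, using Python's standard `multiprocessing` module, is an illustrative demo, not a kernel mechanism: a child process modifies its own copy of a module-level variable, and the parent's copy is untouched, because the operating system gives each process its own isolated address space.

```python
# Demonstration that separate processes do not share an address space.
import multiprocessing

counter = 0  # lives in this process's private address space

def child_increment():
    global counter
    counter += 100  # modifies the CHILD process's copy only

if __name__ == "__main__":
    p = multiprocessing.Process(target=child_increment)
    p.start()
    p.join()
    # The parent's copy is unchanged: the processes are isolated.
    print(counter)  # 0
```

If those two execution contexts had been threads within one process instead, they would share memory and the change would be visible, which is exactly why process isolation is the stronger boundary.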
Data hiding allows us to separate levels of activity from each other. We can effectively screen data that exists at one level in the system from data at other levels, preventing processes from seeing lower-level or higher-level data as a result. This is another cybersecurity engineering mechanism that the operating system and the kernel architecture can employ. Any or all of these things can be implemented through the operating system, and in combination with the operating system and the CPU architecture, we can achieve these end results. Remember that data hiding will allow us to make sure that security is implemented at all the different levels of process execution and that we don't expose data at a different level just because the process is going to be executing there. We only allow the data in question to be seen at levels that are appropriate for the process to access it from.
By the way, in storage systems, if you know anything about storage and virtualization, you can think of data hiding as something that's commonly referred to as masking, where we will only present LUNs to people who provide the appropriate credential and have a requirement to see them. This is done in multi-tenant hosting environments where service providers offering cloud-based services, for instance, will provide the ability to mount storage on a common backend storage array, giving multiple customers access to that array but isolating the individual LUNs each customer is paying for from everybody else, using the masking concept. Data hiding is a very similar approach, but done individually within the individual computer, using the operating system, kernel mode, and access to the CPU in tandem with each other. Very similar concept, but scaled out to an enterprise storage-based cloud solution, we call it masking.
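A toy sketch of the masking idea follows. The LUN identifiers, tenant names, and table layout are all invented for illustration; real arrays do this in firmware and management software, not application code. The point is simply that each tenant's credential maps to the set of LUNs it is allowed to see, and everything else on the shared array stays hidden.

```python
# Toy model of LUN masking on a shared, multi-tenant storage array.
ARRAY = {
    "lun-01": "tenant-a data",
    "lun-02": "tenant-b data",
    "lun-03": "tenant-a archive",
}

MASKING_TABLE = {  # credential -> LUNs this tenant may see
    "tenant-a": {"lun-01", "lun-03"},
    "tenant-b": {"lun-02"},
}

def visible_luns(credential):
    """Present only the LUNs this tenant is authorized to mount."""
    allowed = MASKING_TABLE.get(credential, set())
    return {lun: data for lun, data in ARRAY.items() if lun in allowed}

print(sorted(visible_luns("tenant-a")))  # ['lun-01', 'lun-03']
print(sorted(visible_luns("tenant-b")))  # ['lun-02']
```

An unknown credential gets an empty view, which mirrors the default-deny behavior we want from any data hiding mechanism.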
Abstraction and cryptographic protections
In the context of cybersecurity engineering, abstraction involves the removal of nonessential characteristics from an entity in order to easily represent its essential properties. Effectively, we remove all the information that's not pertinent to whatever the particular solution is. We represent a distilled, broken-down version of it in summary form, without all the supporting detail we would normally need to read a full description. Think of an abstraction like an executive summary. When you have an executive summary in a document, you're distilling the essence of the document down into a couple of paragraphs, where the document may be 30, 40, 100, or 200 pages in length. It negates the need for users to know the particulars of how an object functions, and instead focuses us on a high-level understanding, or description, of what that particular piece of data or that request requires. We don't need to get lost in the weeds, so to speak, lost in the detail. This is what abstraction represents.
Cryptographic protections can be used in a variety of ways: obviously to implement encryption, and to implement cryptography as a means of protecting the confidentiality of the data, with stringent requirements and cybersecurity engineering measures that will carry through the lifetime of that data within the system. So we want to make sure we are aware of that.
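As one small, standard-library sketch of a cryptographic protection, the example below uses an HMAC so a system can detect tampering with stored or transmitted data. This covers integrity only; confidentiality itself would use a cipher such as AES, typically via a dedicated cryptography library. The key and record shown are illustrative placeholders, not a recommended key management practice.

```python
# Integrity protection with an HMAC (keyed hash), using only the stdlib.
import hmac
import hashlib

SECRET_KEY = b"example-shared-secret"  # illustrative only; never hardcode keys

def protect(data: bytes) -> bytes:
    """Compute an integrity tag over the data."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Constant-time comparison, to resist timing attacks."""
    return hmac.compare_digest(protect(data), tag)

record = b"balance=1000"
tag = protect(record)
print(verify(record, tag))           # True: data is intact
print(verify(b"balance=9999", tag))  # False: tampering detected
```

The constant-time `compare_digest` call is the kind of stringent engineering measure the transcript alludes to: even the comparison step is designed so an attacker learns nothing from how long verification takes.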
Host firewalls and intrusion prevention systems
Host-based firewalls and intrusion prevention systems, commonly referred to as IPSes, allow us to create border or perimeter network segmentation solutions. We put them out on the border of our network and allow them to monitor traffic inbound and outbound, creating an effective border crossing, a place where we can inspect traffic and perhaps take action, not just passively look at it. In the case of IPSes, they can take active responses to redirect traffic flows, shut off IP addresses, block them, things of that nature, whereas an IDS, an intrusion detection system, is seen as a passive monitoring solution, really just logging traffic flows, noting things that may be of interest, and generating alerts, but not really being able to take retaliatory action.
IPSes are that next generation of network-based cybersecurity engineering, now a few years old at this point, but the idea is that they can take action, not just passively monitor. The more recent interpretation of these devices, the IDPS, joins the two together into a single solution and gives us the benefit of both monitoring and logging as well as real-time reactionary capabilities. The idea is that they're used to protect individual hosts and/or groups of hosts in network areas from attack, and these are devices that are often deployed on the perimeter, or at the border and gateway of our systems. For now, we just want to be aware of them at a high level, much as we introduced the concept of cryptography on the prior screen, which will come up again in much more detail in one of our future conversations. We want to begin to seed some of those ideas, sprinkle them around so they start to take root and grow, so that as you hear them again later on, you begin to understand what they are.
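The IDS-versus-IPS distinction above boils down to what happens after detection. The toy sketch below makes that concrete: both modes inspect and log the same traffic, but only the IPS mode actively drops it. The packet fields, blocklist, and rule format are invented for illustration; real devices work on live network traffic with far richer rule languages.

```python
# Toy contrast between IDS (detect and log only) and IPS (detect and block).
BLOCKLIST = {"203.0.113.99"}  # illustrative known-bad source address

def inspect(packet, mode="ids"):
    """Return (verdict, log_entry) for a packet represented as a dict."""
    suspicious = packet["src"] in BLOCKLIST
    log_entry = (packet["src"], packet["dst"], suspicious)  # always logged
    if suspicious and mode == "ips":
        return "drop", log_entry   # IPS: active, real-time response
    return "allow", log_entry      # IDS: passive; alert but do not interfere

pkt = {"src": "203.0.113.99", "dst": "10.0.0.5"}
print(inspect(pkt, mode="ids")[0])  # allow  (detected and logged, not blocked)
print(inspect(pkt, mode="ips")[0])  # drop   (blocked in real time)
```

An IDPS, as described above, would simply run both behaviors in one device: every packet is logged for later auditing, and matching traffic is also dropped as it arrives.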
Auditing and monitoring controls
Auditing and monitoring controls help us to understand what systems are up to and what they are doing; to keep track of them; and to have traceability, integrity, and transactional integrity around what's going on, understanding what is happening where, from what direction, under what guise, with what control, etc. This is what auditing and monitoring is all about. We typically use this in logging situations, as we are creating logging entries and logging submissions from one or more running applications or processes. We're gathering large volumes of data. We can then audit to understand whether a transaction completed correctly; we can audit to see whether something succeeded or failed. There are all these different ways we can interpret this information. But, obviously, the more focused we are on creating integrity, providing confidentiality, and ensuring availability, the more auditing and monitoring controls we may want to think about putting in place.
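As a small sketch of the logging-and-auditing loop described above, the example below uses Python's standard `logging` module to record who did what to which object and whether it succeeded, then audits the collected entries for failures. The field names and custom in-memory handler are illustrative; real deployments ship entries to files or a central log platform instead.

```python
# Minimal audit trail: record transactions, then audit them for failures.
import logging

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)

records = []  # in-memory capture so we can audit the entries afterward

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(self.format(record))

handler = ListHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
audit.addHandler(handler)

def log_transaction(user, action, target, success):
    """One audit entry: who, what action, what object, what outcome."""
    audit.info("user=%s action=%s target=%s success=%s",
               user, action, target, success)

log_transaction("alice", "update", "payroll.db", True)
log_transaction("bob", "delete", "payroll.db", False)

# The audit step: scan the gathered entries for failed transactions.
failures = [r for r in records if "success=False" in r]
print(len(failures))  # 1
```

The same stored entries support all the interpretations the transcript mentions: you can ask whether a given transaction completed correctly, filter by user or target, or count failures over time.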
Virtualization

When we think about virtualization and its implications for cybersecurity engineering, we again want to introduce concepts at a high level. Virtual machines are going to be running in their own little isolated sandbox environments. The beautiful thing about virtualization as a technology is that we can group multiple logical guest operating systems, commonly referred to as virtual machines, together on one physical host or a group of physical hosts. As a result, we are able to run them on that host, and we can isolate what they do and how they work. The applications inside the virtual machines run on that virtual instance. They are potentially networked and, of course, connected to other systems if we allow them to be, but the resources that a virtual machine consumes from the host are dedicated to it. And we can shut down, remove, start up, and replace those virtual machines at will, very quickly and very easily, to allow deployed infrastructure to scale up and scale out on demand.
Virtualization is one of the key underlying technologies that makes cloud computing what it has become today: a scalable, robust, cost-effective platform that scales up and scales out on demand. It's virtualization, underlying cloud technologies, that really allows cloud computing to be the force it has become in the modern computing environments we think of today.
As we wrap up our conversations in this area of cybersecurity engineering, we've introduced a lot of topics in one or two statements, a couple of minutes of discussion in some cases. We'll be revisiting most of these introductory topics again in depth in upcoming conversations in this domain and across other domains. As we get into operational discussions later on, we'll be seeing a lot of the discussions around firewalls, virtualization, cloud computing, and IDS, IPS systems, for instance. We'll be taking on cryptography in one of the later conversations in the security engineering domain as we continue on here, and a lot of these topics will continue to pop up again and again in other areas.
So keep in mind that, just because we talk about something very briefly, with a very quick introduction in one area, it doesn't mean it's not important. It simply means we're previewing it in many cases, and we will revisit that technology and that discussion, building depth, clarity, and focus as we go. It is up to you as a CISSP candidate to understand the value of the information in our discussions, to apply it to your systems as needed in the real world to make them more secure, but also to extract the knowledge necessary to qualify to take the exam and be successful.
In other words, you have to study, and you have to think about how you apply the knowledge in the real world. Go out and do, but also go out and study so that you can answer the exam questions; that is really what we're trying to make sure you're aware of. In studying the cybersecurity engineering topics we're discussing, put stress on the ones that have a lot of material to support them, and understand that the discussions we spend a lot of time on will be valuable in helping you to review. Focusing your attention on those areas will prove helpful as you prepare for the exam.