Balancing security and performance: Protecting layer 7 on the network

Date: May 21, 2009

According to a recent SearchSecurity.com survey of nearly 900 IT professionals, 80% of networking and security pros are concerned about application layer threats. In this lesson, application security expert Michael Cobb offers an overview of how the network can be used to secure the application layer.

About the speaker:
Michael Cobb is the founder and managing director of Cobweb Applications Ltd.

Read the full transcript from this video below:  

Balancing security and performance: Protecting layer 7

Eric Parizo: Hello and welcome to “Balancing Security and Performance: Protecting Layer Seven on the Network.” Our speaker today, Michael Cobb, is a CISSP-ISSAP and a renowned security author. He is the founder and managing director of Cobweb Applications, a U.K.-based consulting firm that offers IT training and support in data security and analysis. He co-authored the book “IIS Security” and has written numerous technical articles for leading IT publications, including SearchSecurity.com. Thank you for joining us today, Michael.

Michael Cobb: Hi Eric, it’s great to be here, thanks very much, and welcome everyone to “Balancing Security and Performance, Protecting Layer Seven on the Network”.

To begin, let’s have a quick look at what I’m going to be covering today. I think it’s important that we start with a look at why we need to protect Layer Seven, the application layer of the Open Systems Interconnection, or OSI, model, and how securing the other layers of the OSI model plays a key role in ensuring your applications have defense in depth.

I then want to look at six key network security components, what they do and how to go about choosing the right ones for your network. This will also include a look at how to balance security with network performance.

Having covered the hardware side of your defenses, I also want to cover application development. Secure application development is another element in your defense of Layer Seven because no amount of hardware can protect a poorly developed application.

Finally, I’ll put what we’ve covered into the context of your security policy.

So let’s get going and move on to the next slide, “Why Protecting Layer Seven is so Important.” Why do we need to protect Layer Seven? Why do we have to worry about application layer attacks? Well, Gartner estimates that 75 percent of attacks now take place at the application layer. This is backed up by Symantec, which says that the majority of recent vulnerabilities affect web applications.

So where are hackers finding these vulnerabilities? Data from the Common Vulnerabilities and Exposures (CVE) project shows that the most reported security issues in 2006 were flaws in web software. These flaws can be in specific products; an obvious example here would be vulnerabilities found in popular browsers. But what I encounter the most are flaws in the design and use of an application: the home-grown or in-house-built application.

All these vulnerabilities are attracting the attention of amateur and professional hackers alike. By moving up the network protocol stack and attacking at the application layer, attackers can interface directly with an application’s processes via these vulnerabilities, without actually having to compromise the operating system, or even having to evade a firewall first.

Also, applications often run with system-level privileges, so if an attacker can take control of an application, they gain system-level privileges too. These attacks can lead attackers directly to a jackpot of personal or sensitive data.

One problem I have when discussing application security is getting people to understand what they’re actually up against and that it could, and probably will, happen to them.

Hackers are becoming very sophisticated and well organized. For example, hackers collaborate. One well-known web defacement included credits for 24 hackers; what a great example of network collaboration. Hackers don’t have inter-office politics to contend with either, and they also have a lot of time. They don’t have other distractions like meetings and reports to deal with, like a typical network administrator does.

Hackers also have plenty of resources and now, because the rewards are so big, hackers are working with organized crime. This shift to well-funded, professional attacks is reflected in the fact that 80 percent of all threats are now designed to steal personal information from consumers, steal intellectual property from corporations or control end-user machines.

Just to emphasize this point and leave you in no doubt as to the seriousness of this problem and the need to protect Layer 7, I have two slides kindly lent to me by Ed Amoroso, the CSO at AT&T. This first slide shows a snapshot taken at random of the traffic passing through AT&T. Now AT&T probably handles around 10 percent of the world’s network traffic, so these figures should be multiplied by 10 to get an idea of the global problem represented by these threats.

If we take the file-share probes, which are shown on the second line, the figures here indicate that globally there are almost a quarter of a million unique PCs probing for file-share vulnerabilities, and that’s just in one day. Over the course of the day this amounts to around 110 million probes on the AT&T network alone, or, extrapolated out, over 1 billion worldwide.

I think the various bots, such as the Sasser and Korgo worms, are even more chilling. Believe me, if you have any type of web application, big or small, simple or complex, you really are up against it.

How can hackers generate so much traffic? Well, it has been estimated that up to one quarter of all personal computers connected to the internet have been compromised and are running programs usually referred to as ‘worms’, ‘Trojan horses’ or ‘back doors’, and they run under a common command and control infrastructure. Many of these machines have been compromised via application layer attacks such as buffer overflows, and they are then used to look for and attack other vulnerable machines. These compromised machines are run as bots, short for robots, and are part of what’s called a ‘bot net’.

This next slide, again from AT&T, shows the extent of bots at present on the internet. As you can see, the statistics are quite frightening. Other frightening stats about bot nets include a 10,000-node bot net recently found in Norway, and a one-and-a-half-million-node bot net found by the Dutch police last year. All these bots are looking for ways to enter your network via layer seven.

Now that we’ve looked at what you’re up against, let’s concentrate on how you can build an acceptable level of protection against application layer attacks. The Open Systems Interconnection, or OSI, model is basically an abstract description of how network protocols work, so that different types of systems can communicate with each other.

So, for example, your Windows PC can ask Google’s Linux machines to perform a search. If we use a Google search as an example, if you were to type in ‘network security’ and click the Google search button, your request would be handled by each of the seven layers shown on the screen, all the way down from layer seven to layer one, until it was ready to be sent over the internet to Google.

Once your request reaches the Google machine, it is processed up through layers one to seven until the instruction “search for network security” is processed by the Google search engine application.
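
To make the layering idea concrete, here is a toy Python sketch of that trip down and back up the stack. It is not a real protocol stack; only the layer names come from the OSI model, and the nested dictionaries simply stand in for the headers each layer adds.

```python
# Toy illustration of OSI-style encapsulation (not a real protocol stack).
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def send(payload):
    """Wrap the payload in one 'header' per layer, top to bottom (7 -> 1)."""
    message = payload
    for layer in LAYERS:
        message = {"layer": layer, "data": message}
    return message

def receive(message):
    """Unwrap the headers bottom to top (1 -> 7) to recover the payload."""
    while isinstance(message, dict):
        message = message["data"]
    return message

packet = send("search for 'network security'")
print(receive(packet))   # -> search for 'network security'
```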

Layers one and two are normally protected by switches, so the first attacks against network systems come in at layers three and four, network and transport. These sorts of attacks include things like IP spoofing, wormhole and routing attacks. We now protect these layers with firewalls and intrusion detection systems.

However, packet filtering firewalls only work at layer three, and stateful inspection firewalls only work at layer four and below, so they can’t see what’s actually going on at the application layer, the point at which your instruction to search for ‘network security’ is actually delivered to the Google application.

These traditional firewalls lack the ability to consider such application layer commands. They can’t check and analyze your request to search for ‘network security’, for example. This means they can’t decide whether such a request is genuine or malicious, and so malicious code can travel over internet protocols masquerading as normal application content and reach your applications via layer seven.

Traditional perimeter defense technologies are no longer adequate on their own. This is why we need to protect and secure the application layer. Security is required at every layer possible, and this is best done using a variety of defenses and a holistic approach. Hardware, software and policies all need to be combined to provide a robust defense.

Let’s start then by looking at some of the hardware devices that you can deploy to provide protection.

My top six key network security components would be routers, switches, application layer firewalls, VPN concentrators, intrusion detection and intrusion prevention sensors, and finally host-based intrusion prevention systems. I’m going to look quickly at each in turn, at what role it plays and what security it provides.

A router is going to be the main access point to your network from the outside. Routers work at layer three and have the ability to perform IP packet filtering. They can enforce access control lists to permit or deny TCP and UDP traffic, based on the source and destination address, as well as on the port numbers contained in a packet.

While firewalls are capable of more in-depth inspection, strategically placed routers can increase network security. For example, access control lists on routers can be used to drop obviously unwanted traffic, removing that burden from border firewalls. This allows your devices to process and secure even more traffic. So always look to leverage the abilities of each device.
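
As a rough illustration of the layer 3/4 decision an access control list makes, here is a minimal Python sketch. The rule format, networks and ports are invented for the example and don’t correspond to any particular router vendor’s syntax.

```python
from ipaddress import ip_address, ip_network

# Hypothetical ACL: each rule is (action, source network, destination port or None).
ACL = [
    ("deny",   ip_network("0.0.0.0/0"),      23),    # drop Telnet from anywhere
    ("permit", ip_network("203.0.113.0/24"), 443),   # allow HTTPS from a partner net
    ("deny",   ip_network("0.0.0.0/0"),      None),  # explicit deny-all at the end
]

def filter_packet(src_ip: str, dst_port: int) -> bool:
    """Return True if the packet is permitted, False if it is dropped."""
    for action, src_net, port in ACL:
        if ip_address(src_ip) in src_net and (port is None or port == dst_port):
            return action == "permit"
    return False

print(filter_packet("203.0.113.10", 443))  # True  - permitted
print(filter_packet("198.51.100.7", 23))   # False - dropped at the router
```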

Next on the list are switches. There is no better device to offer initial protection to your network at the physical and data link layers than a LAN switch. A LAN switch is typically a user’s first point of connectivity to your corporate network, and as a result it can offer a point of security. For example, MAC addresses are unique to every network interface card, and switches can be configured to allow only specific MAC addresses to send traffic through a specific port on the switch. This function is known as ‘port security’ and it is useful where physical control over the network port cannot be relied upon.
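
Here is a small Python sketch of the port security idea, with made-up port numbers and MAC addresses; a real switch enforces this in its forwarding logic, but the accept-or-reject decision is essentially a lookup like this.

```python
# Toy model of switch 'port security': each port only accepts frames from
# the MAC addresses an administrator has registered for it. The port numbers
# and MAC addresses below are made up for the example.
ALLOWED_MACS = {
    1: {"00:1a:2b:3c:4d:5e"},                        # finance workstation
    2: {"00:1a:2b:3c:4d:5f", "00:1a:2b:3c:4d:60"},   # shared meeting-room port
}

def accept_frame(port: int, src_mac: str) -> bool:
    """Forward the frame only if its source MAC is registered on that port."""
    return src_mac.lower() in ALLOWED_MACS.get(port, set())

print(accept_frame(1, "00:1A:2B:3C:4D:5E"))  # True  - known device
print(accept_frame(1, "de:ad:be:ef:00:01"))  # False - unknown device, port violation
```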

Switches can also be used to create virtual local area networks, or VLANs, which can be used to further segment LANs.

Next are application layer firewalls. These are going to be the real meat of your network defenses. The switches and routers will provide protection up to layer four, but you need layer seven protection to really protect your applications, as I’ve just been showing.

This is where the firewall comes in. A firewall’s basic task is to control traffic between computer networks with different zones of trust. A firewall’s main function is not to route traffic at the network layer; instead, traffic stops at the firewall and, if it passes the firewall rules, the firewall initiates its own connection to let it carry on its way.

Application layer firewalls provide deep packet inspection. That is, they can analyze and make decisions based on what is contained in the application layer. This context-aware inspection of in-flight traffic is a must-have feature nowadays for any network.
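
To show the difference between checking ports and checking payloads, here is a deliberately simplified Python sketch of application layer inspection of an HTTP request. The handful of patterns is illustrative only; a real application layer firewall ships with far richer signatures and protocol awareness.

```python
import re

# A few illustrative signatures of common application layer attacks.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # SQL injection
    re.compile(r"(?i)<script[\s>]"),         # cross-site scripting
    re.compile(r"\.\./"),                    # directory traversal
]

def inspect_http_request(method: str, path: str, body: str) -> bool:
    """Return True if the request looks clean, False if it should be blocked."""
    for field in (path, body):
        if any(p.search(field) for p in SUSPICIOUS_PATTERNS):
            return False
    return method in {"GET", "POST", "HEAD"}   # only allow expected methods

print(inspect_http_request("GET", "/search?q=network+security", ""))   # True - clean
print(inspect_http_request("POST", "/login",
      "user=admin' UNION SELECT password FROM users --"))              # False - blocked
```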

The downside to application layer firewalls, however, is that they require processing power, but with the introduction of more 64-bit, multi-core machines, this is becoming less of an issue.

Now, my next network device is a VPN concentrator. VPN concentrators are built specifically for creating a remote access or site to site virtual private network, or VPN. They have one leg connected to the public network and one leg connected to the private network and ideally are employed where the requirement is for a single device to handle a very large number of VPN tunnels.

Many enterprise-class VPN concentrators also provide load balancing and failover. So why do you need one as part of your network defenses? Well, remote users are notoriously insecure, and a VPN concentrator will allow secure remote access to your network via the internet. You will, of course, still need to ensure that remote user machines are patched and have up-to-date anti-virus and anti-spyware signatures. But by using a VPN you can protect network traffic as it crosses the internet. Virtual private networks are also a great way to secure a wireless network.

Next up are intrusion detection and intrusion prevention sensors. An intrusion detection sensor is used to detect all types of malicious network traffic and computer usage that can’t be detected by a conventional firewall. This includes such things as network attacks against vulnerable services, data-driven attacks on applications, and host-based attacks such as privilege escalation, unauthorized logins and access to sensitive files.

An IDS is composed of several components. The sensors generate security events, while a console monitors the events and alerts and controls the sensors. There is normally a central engine that records these events in a database and uses a system of rules to generate alerts for the security events it receives.

In many simple IDS implementations, all three components are combined into a single device or appliance. In a passive system the intrusion detection sensor detects a potential security breach, logs the information and signals alerts on the console. In a reactive system, which also tends to be called an intrusion prevention system, the IDS responds to the suspicious activity by actually resetting the connection, or by reprogramming the firewall to block the network traffic from the suspected malicious source. This can happen automatically, or at the command of an operator. 
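
The sensor, engine and console split can be hard to picture, so below is a minimal, hypothetical Python sketch of a rule-driven engine that logs every sensor event and either just alerts (passive IDS) or also blocks the source (reactive IPS). The event fields and the five-failure threshold are invented for the example.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ids-engine")

FAILED_LOGIN_THRESHOLD = 5      # alert after this many failures from one source
failed_logins = Counter()       # running counts kept by the central engine
blocked_sources = set()         # in a reactive setup these would be pushed to the firewall

def handle_event(event: dict, reactive: bool = False):
    """Record a sensor event, then alert or block according to simple rules."""
    log.info("event from sensor: %s", event)              # engine logs everything
    if event.get("type") == "failed_login":
        failed_logins[event["src"]] += 1
        if failed_logins[event["src"]] >= FAILED_LOGIN_THRESHOLD:
            log.warning("ALERT: possible brute force from %s", event["src"])
            if reactive:                                   # IPS-style response
                blocked_sources.add(event["src"])
                log.warning("blocking %s at the firewall", event["src"])

for _ in range(5):
    handle_event({"type": "failed_login", "src": "198.51.100.7"}, reactive=True)
print(blocked_sources)   # {'198.51.100.7'}
```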

While they both relate to network security, an IDS differs from a firewall in that a firewall looks outward for intrusions in order to stop them from happening. Firewalls limit access between networks to prevent intrusion and do not necessarily signal attacks from inside the network.

An IDS evaluates a suspected intrusion once it has taken place, and then signals an alarm. An IDS can also watch for attacks that originate from within a system. IDSs traditionally achieve this by examining network communications, identifying heuristics and patterns of common computer attacks, and then taking action to alert an operator.

Any system which terminates a connection like this is called an intrusion prevention system, and it really is another form of application layer firewall. As you will see as we go through this presentation, nowadays there are a lot of hybrid devices that incorporate various capabilities of the different network security devices.

Intrusion prevention technology is considered by some to be an extension of intrusion detection technology, but it is really another form of access control, like an application layer firewall. As I was saying, the latest generation of firewalls leverages existing deep packet inspection engines by sharing this functionality with intrusion prevention capabilities.

Finally, a host-based intrusion prevention system is an intrusion detection system that focuses its monitoring and analysis on the internals of a computing system, rather than on its external interfaces as a network intrusion detection system would.

A host-based intrusion prevention system will monitor all parts of the dynamic behavior and state of the computer system. A network intrusion system dynamically inspects network packets, whereas a host-based intrusion prevention system can detect which program accesses which resources and ensure that a word processor, for example, hasn’t suddenly and inexplicably started modifying the system password database.

Similarly, a host-based intrusion prevention system might look at the state of the system and its stored information, whether in RAM, in the file system or elsewhere, and check that the contents appear as expected.

You can think of a host-based intrusion prevention system as an agent that monitors whether anything or anyone, internal or external, has circumvented the security policy that the operating system is trying to enforce.

Priority for deploying these systems should be given to staff who have open access to download information from the internet, as well as staff who are mobile or working away from the office.

So those are my top six network security devices. Let’s have a look at how you’d go about choosing which ones to use.

Well, while gateway appliances offer efficient and sophisticated threat management, no single appliance provides a magic bullet to cover all possible risks. So, sadly, there is no obvious best solution when choosing your gateway security devices.

To determine which gateway security devices are best for you, you first need to determine what types of risk you want to mitigate at the edge of your network. For example, if you already have a viable solution in place to protect against viruses, spam and spyware, and are happy with its performance, you may want to focus on reducing web-based risks arriving via port 80 on your web server.

Your objectives and requirements should be laid out in your corporate security policy and this security policy will define what you need and how you would like to secure your network. When looking at threat mitigation devices, review which types of threat they safeguard you against. Some will actually provide safeguards against multiple risks, such as viruses, spyware and malware.

Remember that it is important to implement security at every possible layer so look at the different devices you can choose from and look at what layer they actually protect. Pay close attention to the depth of coverage and the technical approaches that each vendor uses to provide coverage of one or more security areas.

Performance and scalability will also be key. Some devices have limits as to how many email messages they can scan per hour, for example. Other appliances may have networking limitations or only provide capability to protect a narrow range of application protocols.

Always try to look at security, throughput and cost, and balance these against your security requirements. So, once you’ve chosen your security devices, you get to the point where you are actually going to install them on the network. Before any device is connected to your network, you need to ensure that you have hardened it.

This means applying patches as well as taking time to configure the device for increased security. Be sure to reference your security policy during configuration to ensure that each device is set up to do the job intended.

You must, of course, document the changes you make to your network infrastructure, for future reference and trouble-shooting. This involves tracking any changes made to device configurations, now and in the future, to ensure that configurations aren’t changed unintentionally or without due process.

You must also control physical as well as logical access to your network security devices. Installation needs to follow the four-step security life-cycle: secure, monitor, test and improve. And remember, this is a continuous process that, when followed through to completion, loops back on itself in a constant cycle of protection.

Once you have installed your network security devices, it is vital that you monitor what’s actually going on on your network. Network behavior analysis monitors traffic and analyzes it for security purposes. In order to do this, you first must establish a baseline of normal traffic behavior and then continuously monitor for any changes.

The reason for this is that if, for example, a relatively unused host begins to propagate thousands of requests, you will know to investigate, because the host has probably fallen victim to some form of worm. Or if enterprise application traffic deemed sensitive starts to use port 80, compliance policies could be in the process of being breached.
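
A minimal Python sketch of that baseline-and-compare idea follows; the host names, sample values and three-sigma threshold are assumptions made for the example, not taken from any product.

```python
from statistics import mean, stdev

# Hypothetical per-host baseline: outbound connections per hour observed
# during a "normal" benchmarking period.
baseline_samples = {
    "fileserver01": [12, 9, 15, 11, 10, 13, 14, 8],
    "printer03":    [1, 0, 2, 1, 0, 1, 1, 2],
}

def is_anomalous(host: str, current_rate: int, sigmas: float = 3.0) -> bool:
    """Flag the host if its current rate is far outside its own baseline."""
    samples = baseline_samples[host]
    mu, sd = mean(samples), stdev(samples)
    return current_rate > mu + sigmas * max(sd, 1.0)   # floor sd to avoid a zero band

print(is_anomalous("printer03", 1))      # False - normal behaviour
print(is_anomalous("printer03", 4000))   # True  - quiet host suddenly very busy
```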

Network behavior analysis performs both compliance and security management roles. In fact, tools for monitoring traffic for potential breaches are becoming a staple in most security managers’ arsenals. According to Gartner, by the end of this year, 25 percent of larger enterprises will be employing such tools as part of their network security strategy.

Now you’re at a point where you have installed various network devices to provide comprehensive security for the network communications layers and you’re monitoring your network for any unusual behavior. But is your network working? That is, how well is it actually serving its users? Have you created bottlenecks by installing these various security devices?

Remember that the aim of security is confidentiality, integrity and availability. This is where network performance management comes in. It includes the processes of quantifying, measuring, reporting and controlling the responsiveness, availability and utilization of the different network components.

It is important to emphasize here that network performance has to be measured end to end. What truly matters is how well the performance is perceived by your end users; in other words, the performance of the network as a whole. The performance of each individual network component, while obviously important, is less critical to measure than the actual end-to-end result.

A key security issue with regard to network management is which users are using the most resources and what types of data they are sending. As with network behavior analysis, intelligent management of network performance requires historical information, on such things as network traffic, protocols and throughput, to be able to identify trends and deviations from the baseline.

If you have used switches intelligently within your network topology, you can collect a wealth of information on the use and performance of each of the individual LAN segments in the network. But to see beyond LAN segments and get an enterprise-wide, internetwork view of network traffic, you need to use tools based on SNMP and RMON2. RMON2 monitors traffic at the higher protocol layers and can record who is talking to whom on the network and what applications they’re using. This helps establish policies regarding the proper use of the network.
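
RMON2 itself is a probe and MIB standard rather than something you script directly, but the kind of answer it gives, who is talking to whom and over which application, can be pictured with a short Python sketch that aggregates hypothetical flow records by host pair and well-known port.

```python
from collections import Counter

# Hypothetical flow records: (source host, destination host, destination port).
flows = [
    ("10.0.1.5", "10.0.9.20", 443),
    ("10.0.1.5", "10.0.9.20", 443),
    ("10.0.2.7", "10.0.9.21", 25),
    ("10.0.1.5", "203.0.113.9", 6881),   # peer-to-peer port - a policy question?
]

PORT_NAMES = {443: "HTTPS", 25: "SMTP", 6881: "BitTorrent"}

conversations = Counter(
    (src, dst, PORT_NAMES.get(port, f"port {port}")) for src, dst, port in flows
)

for (src, dst, app), count in conversations.most_common():
    print(f"{src} -> {dst} via {app}: {count} flow(s)")
```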

Since most network managers follow the simple rule of ‘better safe than sorry’, quite often we find over-engineered, and consequently over-priced, network infrastructures. However, network performance management can be used today to help network managers manage performance and capacity planning.

One of the most difficult problems you’ll have in building your defenses is balancing security with your system users’ requirements. For example, blocking all incoming email attachments is certainly the simplest and most secure way to stop email-borne worms, but it’s probably not feasible from a business perspective. Also, because resources differ in the criticality of the data they control and the likelihood of being attacked, a layered defense is required to balance protection, cost and performance.

Check whether you can easily improve system configurations before adding additional resources such as RAM or hardware accelerators. It may just be a poorly tuned back-end database that is slow in returning data, or inappropriate settings for your SSL session cache and timeout. If you make changes to your network, such as adding a VPN service to a router or firewall, review whether existing equipment has the capacity to handle the additional workload.

Rather than buying bigger and more expensive web servers, many enterprises need only purchase load balancing equipment. In a nutshell, load balancing divides work between two or more computers. Content switches are typically used for load balancing among groups of servers. Content switches can often also be used to perform standard operations such as SSL encryption and decryption, reducing the load on the servers receiving the traffic and centralizing the management of digital certificates.
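
In its simplest form, the load balancing just described is a round-robin hand-off, sketched below in a few lines of Python with made-up back-end addresses; real content switches add health checks, session persistence and the SSL offload mentioned above.

```python
from itertools import cycle

# Hypothetical back-end web servers behind a content switch.
backends = cycle(["10.0.5.11", "10.0.5.12", "10.0.5.13"])

def pick_backend() -> str:
    """Simple round-robin: hand each new request to the next server in turn."""
    return next(backends)

for request_id in range(6):
    print(f"request {request_id} -> {pick_backend()}")
# requests are spread evenly: .11, .12, .13, .11, .12, .13
```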

However, do make sure your current network is configured correctly before adding load balancing to the mix and if you do use load balancing, make sure that the hardware that it is running on is hardened in the same way as any other network device connected to your network.

Let’s move on now to how you can reduce the number of vulnerabilities that hackers can actually exploit in the applications that you are running.  A comprehensive approach to secure software development is needed to eradicate vulnerabilities within the application layer, because hardware security alone won’t protect a system running a vulnerable application.

Firstly, you need to train your developers in how to write secure code. I’m sure your developers know the eye-catching features of your development platform, but there’s no excuse for them not to know the security features as well. Training staff doesn’t have to be as expensive as it may sound. There are, in fact, lots of excellent free application security forums and online tutorials available on the internet. One of the leaders in this field is the Open Web Application Security Project (OWASP), which has loads of examples of secure coding. Even when your developers are writing code with security in mind, you will still need to test it for technical and logical vulnerabilities.

The whole process of evaluating and monitoring the security of an application needs to be moved into the development process, starting with threat analysis and then the dynamic analysis and static analysis of the code, both of which I want to talk about in a little more detail.

Often within an organization those tasked with IT security do not have an in depth understanding of how the applications they are supposed to be protecting actually work. This tends to lead to overly defensive hardware solutions being put in place, which is why system security is so often seen as expensive and a hindrance, not a business benefit. Conversely, developers often don’t realize the security implications of particular features and functions that they wish to incorporate into their application. 

To resolve this problem of security versus usability, you need to use threat modeling. Threat modeling not only raises security awareness amongst developers, it makes application security an integral part of the application design and development processes. It is a great way to help teams to bridge the knowledge gap between security and development professionals. The end result is actually a reduction in the number of vulnerabilities that make it through to the released version.

And because the cost of addressing security issues increases as the software design life-cycle proceeds, threat modeling not only helps create better products, increasing customer confidence in your applications, but benefits the bottom line, too.

Let’s look at threat modeling in a little bit more detail. It is carried out during the application design stage and is the process of identifying and evaluating the risks to an application. This involves categorizing which assets or sensitive information the application accesses in order to identify potential threats to the application. By employing a data-flow approach, whereby the threat modeling team maps the flow of data through the application, they can identify the key processes and the threats to those processes.
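
Purely as an illustration of that data-flow approach, the sketch below walks a tiny, invented data-flow list and suggests threat categories to discuss for each flow; the categories loosely echo the well-known STRIDE classification, which isn’t mentioned in the talk. Real threat modeling is a workshop exercise between developers and security staff, not a script.

```python
# Hypothetical data flows for a small web application:
# (source, destination, data carried, crosses a trust boundary?)
data_flows = [
    ("browser",    "web server", "login credentials", True),
    ("web server", "database",   "customer records",  False),
    ("web server", "browser",    "order history",     True),
]

def enumerate_threats(flow):
    """Suggest threat categories to discuss for one data flow."""
    src, dst, data, crosses_boundary = flow
    threats = ["tampering", "information disclosure"]   # relevant to any data in transit
    if crosses_boundary:
        threats += ["spoofing of " + src, "denial of service against " + dst]
    return f"{src} -> {dst} ({data}): consider " + ", ".join(threats)

for flow in data_flows:
    print(enumerate_threats(flow))
```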

By having your security professionals and developers sit down together to analyze the application from an attacker’s standpoint, everyone will gain a better understanding of how and why the hacker may attack it and how the vulnerabilities can be removed.

The time to do this is once the user requirements for a new application have been gathered and work has started on the architecture and design of the application. This process not only ensures architecture design issues are resolved early on, but also creates a set of documents that identify and justify the security requirements of the application.

Countermeasures can then be implemented and tested to ensure the application doesn’t leave sensitive or personal information vulnerable to potential attackers. Relying just on perimeter security is not going to keep your applications secure. Using a threat modeling process will ensure that security is built into your applications from day one, increasing their resilience and reducing the support costs at the same time. This is a great tool for showing management how security can actually add business value.

Threat modeling is just the beginning of what needs to be an application security life-cycle. Once development of your applications gets underway, you’ll need to initiate code review. There are basically two types of code review, static and dynamic. Static analysis involves reviewing an application’s source code without actually executing the application itself. This is often done using automated tools that analyze what the code does during every potential program execution. This allows the programmers to create diagrammatic and graphical representations of the code, which give them a better understanding of the code’s effects when executed.

It is then necessary to have experienced developers analyze the results and examine any suspect source code to remove any coding errors.

While program compilers often identify language rule violations, such as type violations and syntax errors, static analysis checks the source code for problems such as semantic errors that pass through compilers and result in vulnerabilities such as buffer overruns, invalid pointer references and uninitialized variables.
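
Real static analyzers model every execution path; purely to give a flavour of the kind of defect being hunted, here is a toy Python script that scans C source for a few library calls that commonly lead to buffer overruns. It is simple pattern matching, not true static analysis, and the list of risky calls is illustrative.

```python
import re

# C library calls that frequently cause buffer overruns when misused.
RISKY_CALLS = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

def scan_c_source(source: str):
    """Report the line number and call name for each risky function found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in RISKY_CALLS.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

sample = """#include <string.h>
void copy_name(char *dst, const char *src) {
    strcpy(dst, src);   /* no bounds check */
}"""
print(scan_c_source(sample))   # [(3, 'strcpy')]
```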

Another advantage of having code reviewed by experts is that developers will inevitably take security more seriously and also document their code more clearly so as not to be picked up by their peers.

However, some problems are difficult to foresee during static analysis.  Interaction of multiple functions can generate unanticipated errors which only really become apparent during component level integration, system integration or deployment. Therefore once the software is functionally complete, dynamic analysis should be performed.

Dynamic analysis reveals how the application behaves when executed and how it interacts with other processes and the operating system itself. While static analysis can find errors early in the development cycle, dynamic analysis tests the code in real life attack scenarios. 

Finding and fixing program errors can be time consuming, but it is worth it, believe me. In fact, Gartner pegs the cost of removing a security vulnerability during testing at less than two percent of the cost of removing it from a production system. Even if your web applications are relatively secure when first deployed, changes to the system and infrastructure configuration and the advent of new threats mean that they won’t remain secure for long.

It’s essential therefore that your security policies are regularly reviewed for relevance and effectiveness. I always find security policies work best when you state why the policy exists, what problem it solves, what needs to be done and who or what is responsible.

You should create your security policy by using the very regulations and requirements that govern your business communications, such as HIPAA, SOX, Visa’s security requirements and so on, and then make sure the policy is enforced. Don’t let your hard work and reputation be ruined by not carrying on with what you’ve started.

Ensuring compliance with your security policy is essential, as it is this document that binds all your security defenses together, making sure that they are consistent and strengthen each other. You need to make sure that all employees fully understand the specific security-related requirements that relate to their particular duties. Accountability for specific security tasks should be included in the formal definition of every job, so that everyone understands that security is not just a one-time deal, but a policy that applies to everyone, all the time.

Security awareness training should emphasize to employees that security is everyone’s job. The effectiveness of security awareness training and whether security policies are in fact being implemented and followed, needs to be monitored so that you can show compliance with the rules and regulations that the organization  itself must follow.

What else can you do to improve the overall security of layer seven? Well, first, be prepared. You are going to be attacked, so make sure you develop an incident response plan. Once you’ve done that, you need to test it and rehearse it. You will then be able to handle an attack in an orderly, effective manner and minimize the impact on your network and its applications. If you do have an incident, once the situation has been dealt with, review how it was handled, a postmortem if you like. The review should examine the who, what, how, when and why of the incident in order to improve your processes, tools and training for the future, so you’ll be better prepared for the next attack.

Not only can you learn from your own incidents, but you can also learn from those of others as well. The six most common problems that allow application layer attacks to succeed are unvalidated input, improper error handling, poor password management, poorly configured and unpatched systems, weak auditing and monitoring processes, and inadequately restricted access to critical information.

So, when you next carry out a security review and audit, check that your applications and systems are not vulnerable due to any of the above vulnerabilities and weaknesses and that your security policy addresses each of these areas.
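
Unvalidated input tops that list, so here is a small, hypothetical example of the allow-list style of validation such a review should be looking for; the form fields and rules are invented for the example.

```python
import re

# Allow-list rules for a hypothetical order form: accept only what we expect.
FIELD_RULES = {
    "customer_id": re.compile(r"^\d{1,10}$"),               # digits only
    "email":       re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "quantity":    re.compile(r"^[1-9]\d{0,2}$"),            # 1-999
}

def validate(form: dict) -> list:
    """Return the names of fields that fail validation (empty list means OK)."""
    return [name for name, rule in FIELD_RULES.items()
            if not rule.fullmatch(str(form.get(name, "")))]

print(validate({"customer_id": "1042", "email": "a@example.com", "quantity": "3"}))
# []  - clean
print(validate({"customer_id": "1 OR 1=1", "email": "x", "quantity": "-5"}))
# ['customer_id', 'email', 'quantity'] - reject before it reaches the application
```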

Finally, because application layer firewalls examine the entire network packet rather than just the network addresses and ports, they have more extensive logging capabilities, recording actual application-specific commands. So don’t let this capability and information go to waste. I would certainly recommend that you employ log file analysis, because it can warn you of impending and actual attacks. Thank you very much indeed for listening.

Eric Parizo: All right. Great presentation Mike, thanks so much. All right Mike, are you ready for our Q and A?

Michael Cobb: I am indeed.

Eric Parizo: All right. Where would you place your six key devices when thinking about the overall network topology?

Michael Cobb: That’s a very good question, and I did think about including a diagram in the presentation, but there are just so many different permutations, depending on the value of the data and resources you are trying to protect. And nowadays there are a lot of hybrid devices available. For example, modern routers have some firewall functionality, and some firewalls can also act as routers. But I’d certainly start with a border router. The router is going to be the main access point from the outside to your internal network, and the router’s packet filtering capabilities can be used to reduce the background noise. You don’t want to block too much here, though, because otherwise you won’t get a complete view of denied packets in your firewall logs.

The firewall is going to be inserted right behind the gateway router that connects to the internet. If you’re using a VPN concentrator, don’t place it in parallel with any other device offering security services. A VPN concentrator doesn’t offer stateful inspection or deep packet inspection, so VPN concentrators should not be placed in parallel, because traffic would otherwise bypass the security services of, say, your firewall. It needs to be placed behind the firewall.

I would also certainly segment resources depending on their security needs, so that firewalls can be tuned specifically for the resources sitting behind them. So, for example, a more secure but slower application layer firewall could be used to protect an ecommerce website, while a fast stateful packet filtering firewall could be used to protect less critical areas of the website. So really, you need to look at what you’re trying to protect to help determine where you would place your devices. And if you are not sure, I would employ a network specialist to work with your network engineer to help you decide what devices you need and where you need to put them.

Eric Parizo: With regard to load balancing, which would you recommend, a hardware-based or a software-based firewall?

Michael Cobb: Well, you always need to balance security, throughput and cost in any risk management decision. Just quickly, software firewalls are installed on a computer and are usually cheaper and more flexible than a hardware firewall. Hardware firewalls are specialized devices optimized to run firewall software; they’re typically easier to install and configure, partly because the operating system has already been hardened. If you are running a very big network that’s dealing with a lot of traffic, I would definitely go for a specialized hardware device. If it’s a smaller network that you may need to expand quickly or reconfigure, that is often more easily done with a software solution. But my main advice would be: don’t get too hung up on whether it’s hardware or software; the key thing is that it meets the objectives that you’ve set and that you have the in-house skills to configure and manage it.

Eric Parizo: One more question for you here. As you know, network behavior analysis can sometimes slow down network traffic. Do you think it’s worth incorporating that technology into a network defense?

Michael Cobb: I do. Application layer attacks are becoming so much more sophisticated. As I said, traditional network defenses aren’t really up to the job, and I do think that network behavior analysis plays an important role in sensing and alerting administrators when something unusual is happening, even on smaller networks with smaller devices. A lot of firewalls, routers and VPN devices now provide logging, and these logs can be fed into network behavior analysis devices such as intrusion detection and prevention systems. The important thing to remember, though, is that you need to take a baseline snapshot of your network against which you can compare any suspicious traffic.

I saw a very good example of how network behavior analysis can play such an important role. AT&T take a regular snapshot and, as a result, they can tell when a new virus is about to be unleashed, as they’re alerted by a small jump in traffic while the virus is being tested. So this shows how important network behavior analysis can be, and I do think it’s an important part of defense for big and small networks alike. But you do need the resources to be able to carry it out, and the time and the tools to analyze the traffic that you’re logging.

Eric Parizo: All right, Mike. I’ve got one more question for you. I know you touched on threat modeling a little bit. How much importance do you give threat modeling as a way to protect applications at layer seven? I’d imagine that it can be very helpful for companies to have the procedure down, but in a way I wonder if it offers a false sense of security. What do you think?

Michael Cobb: For companies that don’t build their own applications it’s not something they’re going to have to do, but for anybody that builds their own applications I think it’s an important part of the application development life cycle. Partly because it ends up saving so much money if you can look at the application design requirements, see where possible attacks may come from and build security in from day one. It’s when you try to build security in at the very end that it becomes so difficult. Security needs to be moved right to the beginning of the application build process, and threat modeling is one way of doing that. I’ve also seen that it helps develop a far better understanding between the developers and the security professionals of what each side is trying to achieve, and by bringing those two sides together with better understanding, you can eliminate a lot of vulnerabilities early on. And it is, as I say, a great cost saving. I think it’s always important for security professionals to be able to show that they are saving the organization money. So, I’m a big fan of it, and for anybody who’s serious about building their own applications, it’s a great way to start.

Eric Parizo: All right very good. Mike, thanks so much again for your insight today. 

We’d like to thank Michael Cobb, founder and managing director of Cobweb Applications, for joining us today. For more information, see Michael’s exclusive tip on layer seven firewalls and switches via the link on your screen, and be sure to check out more resources from our Integration of Networking and Security School, in partnership with SearchNetworking.com, by visiting searchsecurity.com/netsec, that’s searchsecurity.com/netsec. And thanks to all of our listeners for joining us. Have a great day, stay safe out there.
