Cloud computing promises many benefits: It can reduce IT costs and downtime while vastly increasing storage, mobility and provisioning options. But it's also a potential security nightmare: perimeters disappear, clients and servers move around at will, and old models of access control, authentication and auditing no longer apply.
All these challenges can be met, but any cloud migration requires careful planning. Cloud computing fundamentally changes long-standing best practices in network design, encryption and data loss prevention, access control, authentication, and auditing and regulatory compliance. To prepare their network for the cloud, organizations need to take stock of their infrastructure and adjust their practices and processes accordingly.
START WITH THE PLUMBING
A common misperception about cloud computing is that moving services to an off-site provider will reduce bandwidth requirements. In fact, the reverse is often true: Moving applications off-site shifts traffic that once stayed on private WAN links out over each site's Internet connection, which can increase bandwidth requirements. A move to cloud computing also has implications for virtualization and the suitability of existing security infrastructure and security policies.
To understand how cloud computing can radically shift network and security requirements, consider a common hub-and-spoke network design (see Figure 1). Here, branch offices connect with one or more enterprise data centers where key applications reside. There's a well-defined perimeter to the public Internet, and the bandwidth, latency and packet loss characteristics between sites are easy to measure.
In contrast, cloud computing involves Internet connectivity for every site in the enterprise (see Figure 2). Here, given that applications now reside in the cloud, there is no clearly defined perimeter. Further, the traffic characteristics of every site's Internet connection may affect application performance. As a result, some organizations find a cloud migration results in increased requirements for bandwidth and security monitoring.
Beyond basic network characteristics, there's also the question of what kind of traffic leaves the enterprise as it moves to cloud computing. Understanding what kind of traffic you have is just as important as knowing how much traffic you have. If network flow analysis -- which uses existing flow-reporting tools in routers and some switches to provide an in-depth view of application traffic -- isn't already deployed, this would be an excellent time to consider implementing it.
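For organizations weighing that step, a rough sketch of the idea in Python, tallying flow records by application, might look like the following. The record format and port map here are illustrative, not any particular vendor's export format:

```python
from collections import Counter

# Hypothetical flow records as exported by a router's flow-reporting
# feature (fields simplified for illustration).
flows = [
    {"dst_port": 443, "bytes": 520_000},   # HTTPS
    {"dst_port": 443, "bytes": 310_000},
    {"dst_port": 25,  "bytes": 40_000},    # SMTP
    {"dst_port": 21,  "bytes": 12_000},    # FTP -- cleartext, worth flagging
]

PORT_NAMES = {443: "https", 25: "smtp", 21: "ftp"}

def traffic_by_application(flows):
    """Aggregate byte counts per application so the traffic mix,
    not just the total volume, is visible before a cloud migration."""
    totals = Counter()
    for flow in flows:
        app = PORT_NAMES.get(flow["dst_port"], "other")
        totals[app] += flow["bytes"]
    return totals

print(traffic_by_application(flows).most_common())
```

A real deployment would read NetFlow or sFlow exports rather than hand-built records, but the aggregation step is the same.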
To be fair, this is an extreme, strawman example of cloud network design. Hybrid designs are more likely, with branch-office Internet connectivity still channeled through one or more internal data centers. Even so, cloud computing means key applications are reached via new connections outside the enterprise. Testing the network characteristics of these new connections is critical.
NETWORKING AND VIRTUALIZATION
Virtualization is a key enabling technology for cloud computing and data center consolidation. Well before moving to the cloud, many enterprises adopted virtual servers as a means of saving on hardware, increasing uptime or both. For these organizations, migrating a virtual infrastructure to the cloud could have a significant impact on application performance.
Consider vMotion from VMware, the leading virtualization vendor, which moves virtual machines (VMs) between host servers with virtually no downtime perceived by users or applications. This is truly the "killer app" for virtualization; network managers like vMotion because it's such an easy, hitless way to move VMs around.
For all its benefits, though, implementing vMotion into the cloud can affect application performance. First, there's the issue of bandwidth: vMotion requires lots of it, and assumes a high-throughput, low-latency network. It's possible to use vMotion to move VMs across slower wide-area network links, but not with its zero-downtime benefit. This could be an issue when using vMotion between an enterprise staging site and the cloud provider, or even within the cloud provider's network if that encompasses multiple physical sites. Either way, if network managers want to avoid VM downtime, ensuring close proximity of VMware hosts is a must.
Second, vMotion generally requires source and destination VMware host servers to reside within the same layer-2 network (that is, within the same broadcast domain). This isn't a problem in large data centers, which deliberately create very large broadcast domains to accommodate virtualization. However, it could be an issue in moving VMs across different IP subnets, for example, between an enterprise and the cloud provider. Suitability for vMotion should be a part of any network design review. The same caveats apply to vApps, which do for applications what vMotion does for VMs.
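A rough pre-check for this constraint can be scripted. The sketch below, using illustrative addresses, simply tests whether two hosts fall in the same IP subnet; true layer-2 adjacency also depends on VLAN and switching configuration:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int) -> bool:
    """Rough pre-check for vMotion suitability: do two hosts fall in
    the same IP subnet? (Actual layer-2 adjacency also depends on
    VLAN and switch configuration, which this check cannot see.)"""
    net_a = ipaddress.ip_interface(f"{host_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{host_b}/{prefix}").network
    return net_a == net_b

# Two hosts inside one data center vs. a host at a cloud provider:
print(same_subnet("10.1.2.10", "10.1.2.99", 24))   # True
print(same_subnet("10.1.2.10", "172.16.0.5", 24))  # False
```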
IMPACT ON SECURITY DEVICES
If Internet traffic increases with cloud computing, then so too will the load on security devices such as firewalls, VPN concentrators and IDS/IPS appliances. This has implications both for pure performance and for security policy. The performance piece is simple: Increased Internet connectivity means a heavier workload for security devices. It's great to upgrade to, say, a 100-Mbit/s Internet connection as part of the move to cloud computing, but if existing security devices are rated only to 10 Mbit/s, they will quickly become a bottleneck.
Depending on security policy, a move to the cloud may require enabling additional IDS/IPS signatures, and this too can have a negative performance impact. Network Test has conducted multiple performance assessments of multifunction security devices where forwarding rates drop by a factor of 20x or more when IDS/IPS signatures are enabled. VPN devices such as IPSec or SSL concentrators also can degrade throughput and increase latency.
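The arithmetic behind such sizing checks is simple enough to script. The sketch below assumes the 20x worst-case slowdown noted above; the penalty factor and device rating are illustrative numbers to replace with measured ones:

```python
def effective_throughput(rated_mbits: float, ids_enabled: bool,
                         ids_penalty: float = 20.0) -> float:
    """Effective forwarding rate of a security device, assuming a
    20x worst-case slowdown when IDS/IPS signatures are enabled
    (the penalty factor is an assumption; measure your own)."""
    return rated_mbits / ids_penalty if ids_enabled else rated_mbits

link_mbits = 100.0      # upgraded Internet connection
device_rated = 200.0    # vendor data-sheet number, large packets

usable = effective_throughput(device_rated, ids_enabled=True)
print(f"Usable: {usable} Mbit/s; bottleneck: {usable < link_mbits}")
```

Here a device whose data sheet claims 200 Mbit/s delivers only 10 Mbit/s with signatures enabled, well below the 100-Mbit/s link it is supposed to protect.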
Other policy issues to consider include interoperability and changes to existing firewall rule sets. Cloud providers have their own security devices, but long experience with IPSec and SSL VPN troubleshooting suggests interoperability isn't a given. Even though both IPSec and SSL are based on open standards and may work flawlessly inside a multivendor enterprise network, there's no guarantee of interoperability with a cloud provider's equipment. Similarly, firewall and IDS/IPS rule sets will change as enterprises move more applications into the cloud, possibly affecting other parts of the firewall rule set in unexpected ways.
Testing can help validate a move to the cloud, provided it's done with a meaningful workload. When it comes to performance measurement, some security appliance vendors perform tests using overly simple workloads. It's possible, for example, to test a firewall the same way as an Ethernet switch, and then only with large packets. However, this isn't a very stressful load; it will produce impressive numbers for a data sheet, but it's not representative of enterprise traffic.
A better practice is to model the particular mix of applications that will reside in the cloud, paying particular attention to transaction sizes, transaction durations, concurrent connection counts, overall bandwidth utilization and network characteristics, such as latency, jitter and packet loss. With these key metrics in hand, it's possible to craft a synthetic workload that will yield meaningful predictions about security device performance for a given enterprise.
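As a sketch of what that modeling involves, the following computes a few of those metrics from sampled transactions. The numbers are illustrative; real ones should come from flow analysis:

```python
from statistics import mean

# Sampled transactions for one application headed to the cloud
# (illustrative figures; gather real ones from flow analysis).
transactions = [
    {"bytes": 48_000, "duration_s": 0.9},
    {"bytes": 12_000, "duration_s": 0.3},
    {"bytes": 96_000, "duration_s": 2.1},
]

def workload_profile(transactions, concurrent_users: int):
    """Summarize key metrics a synthetic test should reproduce:
    average transaction size and duration, plus aggregate offered
    load if every user runs transactions back to back."""
    avg_bytes = mean(t["bytes"] for t in transactions)
    avg_dur = mean(t["duration_s"] for t in transactions)
    mbits_per_user = (avg_bytes * 8 / avg_dur) / 1_000_000
    return {
        "avg_transaction_bytes": avg_bytes,
        "avg_duration_s": round(avg_dur, 2),
        "offered_load_mbits": round(mbits_per_user * concurrent_users, 1),
    }

print(workload_profile(transactions, concurrent_users=200))
```

Concurrent connection counts and network impairments such as latency, jitter and loss would be layered on top of this in the test tool itself.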
ENCRYPTION AND DLP IMPLICATIONS
As noted, cloud computing changes or eliminates the concept of a perimeter, and that has profound implications for encryption and data loss prevention (DLP).
Prior to cloud computing, network managers were mainly concerned with a single set of encryption endpoints between customers and Internet-facing servers (see Figure 3). That changes with cloud computing, where there are now three sets of encryption endpoints to consider: (1) from customer to Internet, (2) within the cloud, and (3) from cloud to enterprise (see Figure 4). Encryption within the cloud may be necessary for regulatory compliance, or because a cloud provider's network may span multiple physical locations.
There's no one right approach for cloud model encryption. The simplest approach of encrypting everything from end to end sounds appealing, but also has the unintended consequence of "blinding" some key security and network management tools, such as application-aware firewalls and deep-packet inspection devices. Encryption everywhere also can complicate DLP, where the imperative is to maintain visibility of where data is sent and stored.
As usual, policy is the right place to begin in redesigning encryption and DLP for the cloud. At a minimum, a cloud-aware security policy should specify that traffic never leaves the enterprise unencrypted. Security policies should be revised to add requirements for detection of any breach of the encryption policy, including within the cloud provider's network.
Similarly, a cloud migration is an ideal time to review policy as to permitted protocols. A revised policy should banish, once and for all, insecure protocols such as FTP that allow cleartext transmission of passwords and other sensitive data. At the same time, policy also should specify which users can employ protocols that might leak data over encrypted protocols such as SSH and Secure Copy (SCP).
A redesigned DLP infrastructure can help solve some encryption problems by automating many processes. For example, DLP systems can automatically encrypt files attached to email and monitor traffic for files sent outside the enterprise using email or instant messaging. File-level encryption is also an option.
One final question to consider is whether the existing encryption and DLP infrastructure is adequate for cloud computing. Even if an upgrade to encryption and DLP is deemed unnecessary, network managers should consider how to implement these services within the cloud: as VM versions of existing appliances, as hardware devices between VMs and the network, or some combination of these.
A DIFFERENT ACCESS CONTROL MODEL
Cloud computing also changes long-standing concepts about access control. Historically, enterprises have used IP-centric access control models, where rules were based on criteria such as source and destination subnet addresses. That doesn't make much sense in a cloud context, where users can connect from anywhere, on any device, and where servers may be cloned or move around within the cloud.
Cloud computing changes access control from an IP-based to a user-based model. Essentially, cloud computing adopts the network access control (NAC) credo that who you are governs what resources you can reach. Because both clients and servers can be mobile in cloud computing, a dynamic approach to security policy is needed. Access control in the cloud should follow the NAC model of applying rules dynamically, in real time, as endpoints appear on the network. This approach is equally valid for clients and servers.
Of course, user-based access control supplements, but does not replace, the old IP-centric rules. Any sound migration strategy should include a review of existing access control lists (ACLs) on enterprise routers. It may make sense to rewrite and tighten ACLs so inbound traffic for key applications comes from, and only from, the cloud provider. Similarly, new rules may be necessary to enable users to reach newly migrated applications in the cloud.
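As an illustration, a tightened inbound rule set in Cisco-style syntax might look like the following, where 203.0.113.0/24 is a documentation prefix standing in for the cloud provider's published address range, and port 443 assumes an HTTPS-based application:

```
! Accept migrated-application traffic only from the cloud provider's
! address range; log anything else for review.
ip access-list extended FROM-CLOUD
 permit tcp 203.0.113.0 0.0.0.255 any eq 443
 deny   ip any any log
```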
STRETCHING AUTHENTICATION
Cloud computing stretches authentication requirements, both figuratively and literally. Anywhere, anytime client connectivity may require new, stronger forms of authentication. At the same time, the move to place services in the cloud extends the trust domain enterprises need to protect. For both clients and services, strong control over password and key management is a must, as is better break-in detection.
With cloud computing, clients no longer cross a single, well-defined security perimeter before being granted access to enterprise resources. Clients also may connect to these resources from shared public networks such as Wi-Fi hotspots, increasing the risk of password interception. A move to two-factor authentication -- for example, tokens plus some biometric mechanism -- makes sense to ensure clients are properly authenticated. Some well-known public cloud services, such as Google Apps, also support passwords plus tokens for authentication.
Password synchronization is also important. Maintaining separate sets of user accounts and passwords, one apiece for resources in the cloud and in the enterprise, is not a sound practice. Besides the added administrative overhead, two sets of accounts also inconvenience users and double the likelihood they will write down one or both passwords and save them in public view. A single sign-on system covering both enterprise and cloud-based user accounts can help here.
There's also an imperative to protect authentication mechanisms in the cloud, including both passwords and API keys. Many cloud services make use of representational state transfer (REST) Web services, which in turn use API secret keys for authentication. This raises a couple of potential risks. First, REST security can be poorly implemented. For example, a security researcher has demonstrated how a major hosting provider transmits the secret key in plaintext as part of an authentication request. Although the request must be made over SSL, any compromise of either side of the SSL tunnel would also result in loss of the secret key.
Second, even in a well-designed system, the API key represents an extremely valuable resource, with serious consequences if it's lost. For example, enterprises on Google Mail identify themselves to Google's servers using an API key associated with the entire enterprise, not individual users. If this secret key were stolen, an attacker could impersonate any email account or share any Google document associated with the enterprise. Sound practices to protect the API key include encryption and a software audit to review API usage.
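One defensive pattern worth noting is to transmit a keyed hash of each request rather than the key itself. The sketch below shows the general idea with HMAC-SHA256; the field names and message format are illustrative, not any particular provider's API:

```python
import hashlib
import hmac
import time

# The secret key is shared out of band with the provider and is
# never placed on the wire -- only a keyed hash of the request is.
SECRET_KEY = b"example-secret-key"  # illustrative value

def sign_request(method: str, path: str, timestamp: int) -> str:
    """Return an HMAC-SHA256 signature over the request details.
    The server, holding the same secret, recomputes and compares."""
    message = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

ts = int(time.time())
sig = sign_request("GET", "/v1/instances", ts)
# The request carries (method, path, timestamp, signature) -- not the key.
print(sig)
```

Including a timestamp in the signed message also limits replay of a captured request.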
A review of IDS/IPS and DLP configurations also is in order. If signatures to detect cleartext transmission of passwords aren't already in place -- for example, in IMAP and POP email -- they should be added.
AUDITING AND REGULATORY COMPLIANCE
At least initially, cloud computing complicates the security auditor's job, since the systems and processes to be audited will be much more widely distributed. And there will be regulatory considerations when it comes to moving sensitive data to and from the cloud.
Logging and monitoring are critical in the cloud, but they're also more complicated, with large cloud providers' networks spanning multiple continents. While this has the advantage of moving content closer to users, it complicates timestamp synchronization between server logs. Without rigorous time synchronization among servers, troubleshooting becomes very difficult. Setting all system clocks to a single time standard, such as Coordinated Universal Time (UTC), also is essential for taking the guesswork out of distributed log analysis.
A move to the cloud may increase the number of servers involved, especially where virtualization's cloning features are used, and this in turn increases the volume of logs to be analyzed. Network managers may want to consider implementing a unified log analysis system to collect and synthesize data from all the new sources.
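As a small illustration of why synchronized UTC timestamps matter, the sketch below merges per-server logs, assumed already sorted, into a single timeline. Hostnames and entries are invented:

```python
import heapq
from datetime import datetime, timezone

# Per-server logs with synchronized UTC timestamps. Without a common
# time reference, no interleaving of these streams is trustworthy.
web_log = [
    ("2011-03-01T12:00:01Z", "web01", "GET /login"),
    ("2011-03-01T12:00:05Z", "web01", "GET /app"),
]
db_log = [
    ("2011-03-01T12:00:02Z", "db01", "SELECT users"),
]

def parse_ts(entry):
    """Parse an ISO-8601 UTC timestamp from a log entry."""
    return datetime.strptime(entry[0], "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)

def merged_timeline(*logs):
    """Merge sorted per-server logs into one UTC-ordered stream."""
    return list(heapq.merge(*logs, key=parse_ts))

for ts, host, msg in merged_timeline(web_log, db_log):
    print(ts, host, msg)
```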
Various regulatory regimes require data sanitizing as data moves to and from the cloud. This is similar to the encryption issues previously discussed, where cleartext transmission might be acceptable within a secure data center, but is never permitted across a public network. The Payment Card Industry Data Security Standard (PCI DSS) specifications for credit card handling offer a well-known example of data sanitizing. Among other things, these specifications require credit card data to be encrypted, obfuscated or deleted before storage.
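As an illustration of the obfuscation step, PCI DSS permits displaying at most the first six and last four digits of a card number. A minimal masking routine might look like this:

```python
def mask_pan(pan: str) -> str:
    """Obfuscate a primary account number (PAN) before storage or
    display, keeping only the first six and last four digits, the
    most PCI DSS allows to be shown; the rest is masked."""
    digits = pan.replace(" ", "").replace("-", "")
    if len(digits) < 13:
        raise ValueError("not a plausible card number")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

Full compliance involves far more, of course, including encryption of any stored PAN and restrictions on storing other card data at all.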
Cloud providers must be PCI-compliant to handle such data, and also must have auditing measures in place to maintain that compliance. To mitigate risk, enterprises also should require insurance coverage on the cloud provider's part in the event of a data breach in the cloud, and build such coverage into any service contract.
In some cases, enterprises may have more rigorous compliance requirements than a cloud provider can meet. This isn't necessarily a dealbreaker for a given cloud provider, but it may require the enterprise to implement its own compliance framework within the cloud.
Cloud computing's benefits are real: a lower IT profile, faster provisioning and global availability of new services. At the same time, network managers need to think carefully before making the transition. Every challenge discussed here can be resolved, but each will require careful planning before and during the move to the cloud.
David Newman is president of Network Test, an independent test lab and engineering services consultancy based in Westlake Village, CA. He is the author of IETF RFCs on firewall performance measurement and many articles on network device performance and security. Send comments on this article to firstname.lastname@example.org.