Cloud Security, Part 2
Key topics within Cloud Security:
Part 1 dealt with:
- Cloud security planning and design
- Governance and operations
- Multitenant security
Part 2 deals with:
- Security in an automated cloud environment
- Identity management and federation
- Data sovereignty and on-shore operations
- Cloud security standards and certifications
Part 3 will outline:
- Cloud security best practices
IS YOUR DATA MORE OR LESS SECURE IN THE CLOUD?
The basic concept of cloud computing is the transformation and consolidation of traditional compute services, data, and applications to an on-demand, automated, elastic, often third-party hosted cloud. Customers and security industry experts often ask if their cloud environment is more or less secure than a traditional behind-the-firewall enterprise network. Not surprisingly, there are two polar-opposite opinions among industry experts on this:
Some industry experts claim that consolidating all customer data and servers into a cloud means that a successful hacker could have access to massive amounts of proprietary information. This would make the risk of data loss or tampering higher in cloud environments, compared to server farms and applications in traditional enterprise datacenters because the cloud is, in theory, a more attractive and lucrative target.
However, the majority of security industry experts agree that the cloud can be more secure than traditional enterprise IT operations.
A consolidated location for servers, applications, and data is easier to protect and focus security resources on than traditional enterprise systems—if it is done properly. Public cloud providers and any private cloud owner can procure all of the very latest in security appliances and software, centralize a focused team of security personnel, and consolidate all systems events, logs, and response activities.
Industry experts point to consolidated focus, and simpler correlation of logs and events as reasons for cloud environments being more secure than most legacy server farms. Predictable and consistent automated service provisioning of VMs, storage, operating systems (OSs), applications, and updates also improves configuration accuracy, rapid updates, and immediate triggering of security monitoring and scanning.
Cloud automation brings consistency and real-time updating of security and operational systems, improving configuration control, monitoring, and overall security compared to legacy IT environments.
In a cloud environment, the security and mitigation systems are in place protecting the entire network. Cloud providers have sufficient capability and capacity to monitor and react to real or perceived security issues. However, because this is a multitenant environment, any individual customer with higher-than-normal security requirements might be more difficult and expensive to accommodate in a shared cloud model. This is probably the number one reason for private cloud deployments. A private cloud is dedicated to one customer (or group of departments) that have shared goals and, more important, a shared security posture.
Another concern with multitenant cloud security is transparency to the customer. The security and monitoring systems are designed to consolidate and correlate events across the entire environment with a skilled team of security operations personnel managing and responding. Presenting this event data to individual customers in a multitenant environment is difficult because the security software is often not designed to break out the consolidated and correlated data for each tenant.
4) Security in an Automated Cloud Environment
One of the major differentiators of a cloud environment versus a modern on-premises datacenter with virtualization is the automation of as many processes and service-provisioning tasks as possible. This automation extends into patching of software, distribution of software and OS updates, and creation of network zones.
Each of these automated provisioning processes presents a challenge to traditional security monitoring because the software and hardware environment is constantly changing with new users, new customers, new VMs, and new software instances.
A cloud requires equal attention to the automation of asset and operational management; as new systems are automatically provisioned, so too must the security and operational systems learn about the new items in real time so that scanning and monitoring of these assets can begin immediately.
The first rule for any cloud is to automate “everything.” You should plan and design a cloud system with as few manual processes as possible. Manual processes are inherently less consistent and inhibit the rapid provisioning of new services or expanded capacity on demand, which is fundamental to a cloud. So, a core security concept—and this might be contrary to ingrained principles of the past—is to avoid any security processes or policies that delay or prevent automation.
The relentless pursuit of automation brings operational efficiency, consistent configurations, rapid provisioning on demand, elastic scale up and scale down, and support cost savings. This pursuit of all things automated also improves security. The new style of cloud security is to assess and pre-certify all cloud services, applications, VM templates, operating system builds, and so on.
You should adopt the theme “relentless pursuit of automation.” Eliminate any legacy security processes that inhibit rapid provisioning and automation.
As soon as new systems (VMs, applications, etc.) are brought online and added to the asset and configuration management databases, the security management systems should immediately be triggered to launch any system scans and start routine monitoring.
There should be little or no delay between provisioning a new system or application in the cloud and beginning security scans and continuous monitoring.
In a dynamically changing and automated cloud, continuous monitoring should be combined with continuous updating of asset and configuration databases. This real-time updating of assets and configuration changes will be fed into the security systems whenever new servers, VMs, and applications are launched and need to be scanned and monitored.
Without automated updating of asset, configuration, and monitoring systems in real time as cloud services are being provisioned and de-provisioned, it would be almost impossible to keep up (manually or otherwise) with all of the changes to VMs, virtual LANs (VLANs), IP addresses, applications, and so on.
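The provisioning-time hook described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical automation engine that calls `on_provisioned` and `on_deprovisioned`; the in-memory dictionaries stand in for a real CMDB and scanner work queue.

```python
# Sketch of keeping asset and security systems in sync with provisioning.
# All names (ASSET_DB, SCAN_QUEUE, on_provisioned) are illustrative, not a
# real cloud platform API.
import datetime

ASSET_DB = {}    # asset id -> record (stands in for a CMDB)
SCAN_QUEUE = []  # work queue consumed by the vulnerability scanner

def on_provisioned(asset_id, asset_type, ip_address):
    """Called by the cloud automation engine the moment a VM or
    application instance comes online."""
    record = {
        "type": asset_type,
        "ip": ip_address,
        "provisioned_at": datetime.datetime.utcnow().isoformat(),
        "monitored": True,           # continuous monitoring starts now
    }
    ASSET_DB[asset_id] = record      # real-time CMDB update
    SCAN_QUEUE.append(asset_id)      # initial security scan triggered immediately
    return record

def on_deprovisioned(asset_id):
    """Remove retired assets so monitoring always reflects current state."""
    ASSET_DB.pop(asset_id, None)

on_provisioned("vm-1042", "vm", "10.0.4.17")
```

The essential point is that the same automation event that creates the VM also updates the inventory and triggers the scan, so no manual step can be skipped.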
PRECERTIFICATION OF VM TEMPLATES
Organizations with strict security accreditation processes often struggle with the idea that cloud services should immediately provision new VMs when ordered. IT security teams should pre-certify all “gold images” or templates that can be launched within new physical devices or VMs. These templates might include a combination of an OS, applications, patches, and agents for network or security monitoring.
As soon as a VM is ordered, the cloud automation system copies the gold image template to the new VM and boots it. Because the security teams have already approved the templates, each new VM that is based on a pre-certified template should also be considered approved and in compliance.
Of course, any future changes to the applications or VM might need to go through additional change control and security scrutiny. One of the best ways to control software application deployment and security management is to create and certify automated application installation packages that can be deployed in combination with VM templates
Certification of gold images is not just an initial step when using or deploying a new cloud. Many organizations and customers will request that existing or future gold images—homegrown or commercial off-the-shelf (COTS) applications and configurations—be loaded and added to the cloud service catalog. Security experts should perform scanning and assessments of every new or modified gold image before loading it into the cloud management platform and giving customers the ability to order it.
Using a combination of VM templates and smaller application installation packages—all pre-certified by security—will reduce the frequency of having to update the master VM gold image. Also realize that when a new gold image is accepted and added to the cloud, the cloud operational personnel (depending on contract scope) might now be responsible for all future patches, upgrades, and support of the template. Many cloud providers charge a fee to assess and import customer VMs or gold images. Customers might push back on this extra cost, so you should take the time to explain the need for these manually intensive assessments and the ongoing upgrades and support required.
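The pre-certification check itself can be as simple as an integrity allowlist. The sketch below is illustrative, not any vendor's API: the security team records a SHA-256 digest for each approved gold image, and the automation refuses to clone any template whose digest does not match an approved one.

```python
# Minimal sketch of gold-image pre-certification via a checksum allowlist.
# Function names and the image bytes are hypothetical.
import hashlib

APPROVED_IMAGE_HASHES = set()   # populated by the security team

def certify_image(image_bytes):
    """Security team approves a gold image after scanning and assessment."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    APPROVED_IMAGE_HASHES.add(digest)
    return digest

def provision_vm_from_image(image_bytes):
    """Automation refuses any template that drifted from the certified
    version; even a one-byte change alters the hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest not in APPROVED_IMAGE_HASHES:
        raise PermissionError("image not pre-certified; manual review required")
    return {"status": "provisioned", "image_sha256": digest}

gold = b"os-build + hardening + monitoring agent"
certify_image(gold)
vm = provision_vm_from_image(gold)   # allowed: matches certified hash
```

A modified or customer-supplied image fails the check and falls back to the manual assessment path described above.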
PRECERTIFICATION OF NETWORK ZONES AND SEGMENTATION
Most cloud services, such as a VM or applications running on a VM, require a default network or virtual network connection as part of the automated configuration. You can configure VMs with multiple virtual network interfaces that connect to one or more production or nonproduction network segments within your datacenter. These network configurations should be included in the VM configuration that your security team evaluates during precertification.
You might want to offer additional network segmentation as an option, using virtual firewalls and VLANs to secure or isolate networks. Applications that need to be Internet-facing should be further segmented and firewalled from the rest of the production cloud VMs and applications. Platform as a Service (PaaS) offerings are often configured with multiple tiers of VMs and applications that interact, and they can have several network zones to protect web-facing frontend servers from the middleware and backend databases that together form the enterprise application.
Pre-certify all production and nonproduction network segments so that VMs can be provisioned automatically without manual security approval processes. Also consider preapproving a pool of optional virtual networks that can be provisioned automatically upon a customer order.
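Preapproving a pool of virtual networks might look like the following sketch. The VLAN IDs and function names are hypothetical; the key design point is that the order path only hands out segments that security certified in advance, and fails closed rather than creating an uncertified segment on the fly.

```python
# Sketch of a pre-approved pool of virtual networks: segments are certified
# ahead of time so customer orders need no manual approval step.
PREAPPROVED_VLANS = [101, 102, 103, 104]   # certified by security in advance
ALLOCATED = {}                             # vlan id -> customer

def allocate_segment(customer):
    """Hand out the next certified VLAN; fail closed when the pool is
    exhausted instead of provisioning an uncertified segment."""
    for vlan in PREAPPROVED_VLANS:
        if vlan not in ALLOCATED:
            ALLOCATED[vlan] = customer
            return vlan
    raise RuntimeError("pre-approved pool exhausted; security must certify more")

vlan = allocate_segment("tenant-a")
```

When the pool runs low, certifying a new batch of segments becomes a scheduled security task rather than a blocker in the provisioning path.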
Security pre-certification also extends to all applications and future updates that will be available on the cloud. You should configure applications as automated installation packages where any combination of application packages can be ordered and provisioned and then installed on top of a VM gold image.
By separating the VM gold image from application installation packages, you can reduce the number of VM image variations and frequency in updating VM images (compared to fully configured VM images that include applications). Additional packages for upgrades and patching of the OS and apps will also be deployed in an automated fashion to ensure efficiency, consistency, and configuration management.
Use a combination of security-approved VM templates and application installation packages. Reduce the quantity of VM image variations and frequency of updates by separating the OS image from the applications.
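A quick back-of-the-envelope calculation shows why separating the OS image from application packages shrinks the certification workload: fully baked images need one artifact per OS/application combination, while separated artifacts grow additively. The OS and application names below are illustrative.

```python
# Illustrative count: fully configured VM images vs. separated artifacts.
from itertools import combinations

os_images = ["win2019", "rhel9"]
apps = ["web", "db", "mq"]

# Fully configured images: one per OS per non-empty application combination.
baked = [
    (o, combo)
    for o in os_images
    for r in range(1, len(apps) + 1)
    for combo in combinations(apps, r)
]

# Separated model: certify each OS image and each app package exactly once.
separated = len(os_images) + len(apps)

print(len(baked), "baked images vs", separated, "separate artifacts")
```

Even in this tiny example the combined model needs 14 certified images against 5 separate artifacts, and the gap widens rapidly as the catalog grows.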
More complex multitiered applications (e.g., multitiered PaaS applications) will require significantly more security assessment, as well as security involvement in the initial application design process. If security experts are not involved with the initial multitiered application design, trying to map multiple production-ready application tiers to the automated and pre-certified network segments or VLANs can be a nightmare.
ASSET AND CONFIGURATION MANAGEMENT
Many organizations have a mature asset and configuration-management system in place. In a private cloud environment that uses automated provisioning, the key to success is to also automate the updating of asset and configuration databases.
This means that you configure the cloud management platform, which controls and initiates automation, to immediately log the new VM, application, or software upgrades into the asset/configuration database(s). Because this is done through automation, there is little chance that updating the asset or configuration databases is skipped and the accuracy of the data will be improved when compared to legacy manual update procedures.
The overall goal is to have all inventory, monitoring, and security systems updated in real time so that network, security, and operations teams are continuously monitoring the current state of the environment and all its assets and continuously changing configurations.
Automated configuration changes, which are based on preapproved packages or configurations, should be marked as “automatically approved” in the change control log, fulfilling the purpose of a change control log as an auditing tool. In this case, the change log entries are automatically entered, but there will likely be other more significant infrastructure configuration changes throughout the cloud that can and should still follow the manual change control board process.
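The auto-approval logic described above can be sketched as follows. The package names and change-log structure are hypothetical; the point is that pre-approved changes are still logged for audit, while anything outside the approved set is routed to the manual change control board.

```python
# Sketch of change-control logging with automatic approval for changes
# built from pre-approved packages. All names are illustrative.
PREAPPROVED_PACKAGES = {"web-pkg-2.1", "os-patch-2024-06"}
CHANGE_LOG = []

def record_change(target, package):
    if package in PREAPPROVED_PACKAGES:
        status = "automatically approved"   # still logged as an audit record
    else:
        status = "pending manual review"    # goes to the change control board
    entry = {"target": target, "package": package, "status": status}
    CHANGE_LOG.append(entry)
    return entry

record_change("vm-7", "web-pkg-2.1")          # routine automated deployment
record_change("core-router", "custom-fw")     # significant infrastructure change
```

This preserves the change log's role as an auditing tool without putting a human approval step back into the automated provisioning path.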
5) Identity Management and Federation
User identity management, synchronization of directory services, and federation across multiple networks differ significantly in a cloud environment compared to a traditional enterprise IT environment.
With the single sign-on (SSO) model, applications or other resources in the cloud use the same logon information that an end user provides when she logs on to her computer, precluding the need to prompt the user for any additional logon information. This is done through a variety of techniques depending on the desktop OS, the applications, the network infrastructure, and possibly third-party software that specifically enables this functionality.
Within a traditional local area network (LAN) hosted by an enterprise organization in its own facility, having a single network authentication system, such as Microsoft Active Directory, is not very difficult; in fact, it is a built-in feature of the Microsoft Windows Server OS. The Lightweight Directory Access Protocol (LDAP) is a more universal industry standard for user directory services and authentication that is not specific to any software manufacturer. Security Assertion Markup Language (SAML) is an even better solution for cloud environments when SSO and federation are used. The challenge arises when users access data on multiple server farms, across wide area networks (WANs), and on multiple applications created by different software manufacturers. As you implement cloud services, this becomes even more complex.
A cloud service provider can only do so much to enable SSO from their facilities. There are cloud providers that implement third-party software solutions that broker authentication to downstream applications and networks. This requires each cloud customer and application to integrate with the centralized authentication system that the cloud provider has chosen.
There are numerous identity and authentication systems available in the industry that either the cloud provider might have available for customer use, or a customer can deploy its own within its VM. So, there is no one answer to implementing SSO; however, LDAP and SAML are the primary industry standards. All applications and OSs that you want to integrate with the cloud or migrate to the cloud should support one or both of these protocols.
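The assertion pattern that underlies SSO and federation can be illustrated with a toy sketch: an identity provider signs a statement about the authenticated user, and each downstream service verifies the signature instead of prompting for credentials again. Note that real SAML assertions are XML documents signed with X.509 keys; the HMAC-based version below only demonstrates the trust relationship, and all names are hypothetical.

```python
# Toy illustration of the signed-assertion pattern behind SSO/federation.
# NOT real SAML: SAML uses XML and X.509 signatures; this sketch only
# shows how a service trusts the identity provider's statement.
import hashlib
import hmac

SHARED_KEY = b"idp-and-sp-key-provisioned-out-of-band"  # federation trust anchor

def issue_assertion(user):
    """Identity provider: sign a claim about the authenticated user."""
    claim = f"user={user}".encode()
    sig = hmac.new(SHARED_KEY, claim, hashlib.sha256).hexdigest()
    return claim, sig

def verify_assertion(claim, sig):
    """Service provider: accept the user without a second logon prompt
    if, and only if, the signature checks out."""
    expected = hmac.new(SHARED_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = issue_assertion("alice")
```

Because only the signed claim travels between parties, the service never handles the user's password, which is the property that makes cross-cloud SSO workable.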
One area related to SSO and identity management is federation, also called Federated Identity Management (FIM). Federation is when you connect multiple clouds, applications, or organizations to one or more other parties. The list of users and authentication details are shared securely across all parties of the federation. This makes possible features such as allowing one organization to see another organization in a Global Address List (GAL), or sending an instant message to another person across organizations.
The federation software creates and maintains a bridge between the disparate networks and applications, effectively synchronizing and/or sharing user lists between one or more organizations. In a cloud environment with distributed applications, data, and users located potentially all over the world, federation and SSO is what makes this seamless experience possible. Your average daily tasks performed in the cloud might actually involve logging on to a dozen applications, databases, networks, and cloud providers, but all of this is transparent to you due to federation and SSO.
Figure. An overview of Federated Identity Management
6) Customer Accreditation of Cloud Services
It is difficult—if not impossible—to get a public cloud provider to give an individual customer access to the provider’s network so that customer IT security staff can perform an accreditation. In fact, showing customers what is happening inside the network could be tantamount to exposing intellectual property to customers—and potentially competitors—by giving too much visibility into internal security systems and procedures.
Although most public cloud providers rarely allow individual customer inspection and accreditation, providers have in some cases allowed a third-party assessment so that the public cloud provider can sell its services to government and other large customers with requirements for an official security accreditation. The U.S. government’s FedRAMP accreditation process, which uses third-party assessment vendors, is an excellent example of this approach.
A private cloud deployment is much more accommodating and suitable for customer accessibility and a security accreditation process. The security standards and accreditation process are the same or very similar for a public cloud, with any multitenant cloud getting the highest level of scrutiny for security controls and customer data isolation.
As part of planning your organization’s transition to cloud, you need a complete understanding of the cloud models, the security standards that you need to follow, and the personnel who will perform the security accreditation. When procuring a public cloud service, your evaluation criteria should include the designed security accreditation.
For a private cloud deployment, ensure that your organization, or the systems integrator that does the deployment, is capable of and experienced in highly secure cloud computing and already has security accreditation experience. Finally, remember that security accreditations normally require annual (or some other periodic) reassessments and certification renewals.
As most public and private clouds mature and add new capabilities over time, these periodic accreditations are not just a quick “rubber stamp” process but involve assessing the entire system again with particular attention to the new services or configuration changes.
7) Data Sovereignty and On-Shore Support Operations
Data sovereignty refers to where your data is actually stored geographically in the cloud—whether it is stored in one or more datacenters hosted by your own organization or by a public cloud provider. Due to differing laws in each country, sometimes the data held by the cloud provider can be obtained by the government in whose jurisdiction the data is stored, or perhaps by the government of the country where the data provider is based, or even by foreign governments through international cooperation laws.
Furthermore, government monitoring or snooping (some governments tend to change laws or push the bounds of legality to serve their own purposes) on behalf of crime prevention agencies has also become a concern.
Not everything here is doom and gloom. There are “safe harbor” agreements between key governments such as the United States and the European Union to better enforce data privacy and clarify specific scenarios and data types that can legally be turned over by a cloud provider upon official requests. Organizations using public cloud services should examine the policies and practices of a prospective cloud provider to answer the following questions:
- Where will data, metadata, transaction history, personally identifiable data, and billing data be stored?
- Where will backups or replicated data for disaster recovery be located? What is the retention policy for legacy data and backups? How is retired data media securely disposed of?
- Where are support personnel located, and to what do they have access? How are their background checks performed?
- Where is the provider’s primary headquarters, location of incorporation, and under which laws and jurisdictions do they fall? How does the provider respond to in-country or foreign government requests for data discovery?
- Is the government authority or third party obligated to notify you that it has taken possession of your data?
Data sovereignty and data residency have become a more significant challenge and decision point than most organizations and cloud service providers originally anticipated. Initially, one of the selling points a cloud service provider would emphasize was that you, as the customer, didn’t need to be concerned with where and how it stored your information—there was an SLA to protect you. The lesson learned is to ask for, or contractually require, your cloud provider to store your data in the countries or datacenter locations that fit your data sovereignty requirements.
Also consider whether you require that all operational support personnel at the cloud provider be located within your desired country and be local citizens (preferably with background checks performed regularly). Combined with data sovereignty, this helps to ensure that your data remains private and is not unnecessarily exposed to foreign governments or other parties with whom you did not intend to share it.
You should request that data be stored in the country of your choosing to maintain your data privacy rights. Many public cloud providers now offer these options and this is definitely a consideration for building your own private or hybrid cloud environment.
If you are a private cloud operator, you should not only have published policies to address these concerns, but also consider formal written internal policies, such as the following:
- All staff must know the policies with regard to when and if to respond to government and other requests for data release.
- Staff must be fully versed in all data retention policies and procedures for data retirement.
- There must be a clearly articulated policy for cloud data locations, replication, and even temporary data restorations in order to maintain data sovereignty for customers with such requirements and contracts.
- An internal policy review committee must be established, as well as a channel into the corporate legal department, for handling each official data request and overall policy governance.
- A documented plan should be in place for how to handle document requests and other legal events that might occur; be specific with respect to law enforcement and government entities and how each will be handled.
End of Part 2.