Trilogix Cloud

Migrating Applications to the Cloud – a summary of how to do it

Every modernized data-centre or cloud will provide, at a minimum, basic VM infrastructure, storage, and network services. When you transform mission-critical applications for use in the cloud, they can take advantage of the unique benefits that the cloud offers. Purchasing or deploying a basic IaaS-focused cloud service is just an initial step in an overall enterprise IT transition to cloud. It is the application porting or redevelopment that forms the long-term path to completing that transition.

Key Take-Away

You must assess each application to determine whether a simple porting is possible or if the application will require a complete redesign to migrate to the cloud.


There are four application modernization strategies for migrating legacy applications to the cloud (figure below). First carry out a careful analysis of each legacy application to determine the method that maximizes long-term benefits, taking into account time, costs, and user impact. Some legacy applications are mission critical and so unique to your business that the long-term effort to redesign and recode them is worth the time and cost. Others are relatively simple applications that you can quickly port (i.e., rehost or refactor) to a cloud platform, or even eliminate and replace with a SaaS offering from a cloud provider.


Figure: Application modernization strategies


Rehosting (or porting) an application is essentially a “copy and reinstall” technique for taking relatively simple legacy applications and hosting them in the cloud. These applications typically consist of one or more VMs and possibly a database; they are traditional applications that were not originally designed for the cloud. In the ideal scenario, a physical-to-virtual (P2V) migration is possible. Testing and minor configuration changes are often required, but the overall effort to port a single application is usually measured in days or weeks. Although you might be able to rehost on the cloud quickly and without modifying the application architecture, many cloud characteristics (e.g., scalability, resiliency, and elasticity) might not be fully realized.



Refactoring an application is similar to rehosting, but some components of the application are enhanced or changed. You might break the application into tiers, such as frontend web servers and backend databases, so that you can cluster and load balance each tier to provide higher availability and elasticity and to handle more traffic and workload. In this scenario, the core application itself needs little or no reprogramming, because the cloud infrastructure and hypervisors provide some of the scalability without the application having any of the “awareness” a cloud-native app would have.
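To make the tiering idea concrete, here is a minimal sketch of round-robin load balancing across a pool of frontend web servers, the kind of distribution a refactored frontend tier relies on. This is an illustration only, not any particular product's behaviour, and the hostnames are invented:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal sketch: rotate requests evenly through a pool of backends."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def next_backend(self):
        return next(self._pool)

# Hypothetical frontend web-server pool for a refactored two-tier application
lb = RoundRobinBalancer(["web-1.example.com", "web-2.example.com", "web-3.example.com"])
first_four = [lb.next_backend() for _ in range(4)]
print(first_four)  # wraps back to web-1 after the pool is exhausted
```

Real load balancers add health checks and session affinity on top of this basic rotation, but the round-robin core is the same.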



If there are legacy applications that you cannot rehost or refactor (because doing so would not produce the desired performance, quality, or ROI), consider redesigning them. These are often the oldest and most complex legacy applications, possibly mainframe-based, and might require multi-layered platforms, databases, middleware, and frontend application servers to be deployed in the cloud. The most complex legacy software might require significant assessment and planning just to produce a project plan and budget and to determine the feasibility, risk, and business case for the reprogramming. The long-term benefits of a new, properly designed cloud-native application include dynamic elasticity, resiliency, distributed processing, high availability, faster performance, and lower long-term code maintenance.



Purchasing services from an existing SaaS cloud provider and retiring the legacy application is often a fast and effective cloud-transition strategy. If the ROI of transitioning a legacy application to the cloud is poor, the organization should consider whether the application is truly mission critical or necessary, and whether a COTS or SaaS cloud offering could be used instead. Technically, you can consider an application hosted in a private cloud as SaaS, but most SaaS applications in the industry are hosted with public cloud providers. Excellent examples are public cloud email providers such as Microsoft Office 365 and Google Apps. Sometimes these SaaS applications provide more features than your legacy software, but they also might not be as configurable as you are accustomed to, because they are normally hosted on shared public cloud infrastructure.



Regardless of which application modernization strategy you use, organizations must also consider operational factors. For example, applications that have been ported to a public cloud might still need database maintenance, code updates, or other routine administrative support. Applications that are hosted as a PaaS or SaaS offering might need less support because the cloud provider takes responsibility for updating and patching OSs and software components. Of course, in a private enterprise cloud, your staff (or hired contractors) continue to provide this support, but hopefully in a more automated manner than in a traditional data-centre. Consider your current staffing levels, outsourced support, and overall IT organizational structure. Most organizations that have already deployed a private cloud or use some public cloud have not reduced their IT staffing levels; however, they have changed the skillsets and team structures to better accommodate a more service-oriented model that is best suited to supporting a cloud ecosystem.


Application monitoring

When it comes to mission-critical applications that are core to your organization’s customers and livelihood, you might keep these applications hosted within a private enterprise cloud or a secure public provider. In either situation, you should still be concerned with monitoring the performance and user experience (UX). The private or public cloud management tools will provide some level of VM utilization monitoring, and perhaps some limited application-level monitoring, but this is usually not adequate for truly mission-critical applications (it is likely fine for normal business productivity systems). So, regardless of where your mission-critical apps are hosted, public or private cloud, you should still use your own application monitoring tools and techniques, including synthetic transactions, event logging, utilization threshold alerts, and more advanced UX simulated-logon tools.
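As an illustrative sketch of one of those techniques, a utilization threshold alert can be as simple as classifying metric samples against warning and critical levels. The host names and threshold values below are hypothetical, not prescriptive:

```python
def check_thresholds(samples, warn=70.0, crit=90.0):
    """Classify CPU-utilization samples (percentages) against alert thresholds."""
    alerts = []
    for host, cpu in samples.items():
        if cpu >= crit:
            alerts.append((host, "CRITICAL", cpu))
        elif cpu >= warn:
            alerts.append((host, "WARNING", cpu))
    return alerts

# Hypothetical per-VM CPU readings collected by a monitoring agent
metrics = {"app-vm-1": 45.0, "app-vm-2": 76.5, "db-vm-1": 93.2}
print(check_thresholds(metrics))
```

A production monitoring tool would add averaging windows and alert de-duplication so that a single noisy sample does not page anyone, but the threshold comparison at the core is this simple.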


Service levels

Consider the service-level agreements (SLAs) for applications hosted in the cloud. Many public cloud providers offer a default level of service guarantee and support that is insufficient for mission-critical applications. In some cases, the public cloud provider does not even guarantee that it will back up data, provide credit, or be liable for data loss. Be careful how cloud providers word their SLAs: they might count only network availability in their uptime calculations rather than PaaS or SaaS platform service levels. Other vendors claim extensive routine maintenance windows (in other words, potential outages) that are also excluded from the SLA.
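A quick way to sanity-check an advertised SLA is to convert the uptime percentage into the downtime it actually permits. This sketch assumes a month of roughly 730 hours; adjust the period for your own contract terms:

```python
def allowed_downtime_minutes(sla_percent, period_hours=730):
    """Maximum downtime per period (default: ~one month) permitted by an SLA."""
    return period_hours * 60 * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% SLA -> {allowed_downtime_minutes(sla):.1f} min/month allowed downtime")
```

A 99.9% SLA still permits roughly 44 minutes of downtime per month, and that figure is before any maintenance windows the provider has excluded from the calculation.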


Federated authentication

Consider user authentication and access controls for cloud-hosted applications. You might want to federate an enterprise user directory and authentication system (e.g., Microsoft Active Directory or LDAP) to the cloud for an always up-to-date and consistent user logon experience. A preferred method is to use a vendor-agnostic industry standard for authentication, such as SAML, especially when federation and SSO are required. You can also integrate an IDaaS (Identity as a Service) offering, such as Okta or Centrify, with your LDAP directory or existing AD to achieve SSO.



When migrating applications to IaaS- or PaaS-based cloud services, you might gain scalability features that were not easy, cheap, or available in the legacy enterprise environment.


Scale out

Depending on the type of application modernization undertaken for a given application, the cloud-based system might now be able to take advantage of dynamic or automated scale out of additional VMs (technically, this capability is called elasticity). It is preferable that the application be cloud native or cloud enabled so that it can detect peak utilization and trigger scale out automatically. For legacy applications moved to the cloud, you can use the hypervisor and cloud infrastructure to measure utilization against defined thresholds that trigger scale out, even though the application is unaware of these events. Scaling back down after peak utilization subsides is just as important as scaling out. Again, cloud-native applications that handle this automatically are more efficient and faster to react than legacy applications that rely on the hypervisor to scale.
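The threshold-driven behaviour described above can be sketched in a few lines. This is a simplified illustration of the decision logic, not any provider's autoscaling API; the threshold and bound values are hypothetical:

```python
def desired_replicas(current, avg_cpu, scale_out_at=80.0, scale_in_at=30.0,
                     min_replicas=2, max_replicas=10):
    """Threshold-driven elasticity sketch: add a VM above the high-water mark,
    remove one below the low-water mark, always staying within fixed bounds."""
    if avg_cpu >= scale_out_at:
        return min(current + 1, max_replicas)
    if avg_cpu <= scale_in_at:
        return max(current - 1, min_replicas)
    return current

print(desired_replicas(3, 85.0))  # peak load: scale out to 4 replicas
print(desired_replicas(4, 20.0))  # load subsided: scale back in to 3
```

Note the floor of two replicas: scaling back in never drops below the minimum needed for availability, which mirrors the point above that scaling down matters as much as scaling out.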


Scale up

Scaling up an application refers to increasing the size of a server or, more commonly in cloud computing, a VM, giving it more memory and processors to handle an increasing workload. Whereas scaling out involves launching new VMs to handle peak utilization, scaling up involves enlarging the configuration of the same physical server or VM(s) running your applications (up to the maximum number of processors and memory capacity of that particular physical server or VM).

Scale up is considered a legacy scaling technique. It does not provide cloud-level resiliency or redundancy and is not as efficient as scale out. Scaling up, or vertical scaling, basically means “just buy a bigger server” rather than using the smaller, more purpose-built servers and services of a scale-out cloud configuration. Another downside of scale up is that you often need to reboot the VMs for them to recognize the new processor and memory configuration. However, the need for this additional step will likely recede as some hypervisor platforms begin to support dynamic flexing of additional processors and memory.

Finally, consider the scalability of your applications in terms of geographic access and performance. This refers to load balancing and/or hosting applications in multiple geographic locations to maximize performance and reduce network latency (the inherent delays in the communications path over the network). You might want to deploy your application in multiple cloud data-centres on opposite sides of the country in which you reside, or in different regions of the world, so that end users are automatically routed to the closest and fastest data-centre. Be aware, however, that many cloud providers charge additional fees for data replication, geo-redundancy, bandwidth, scale up/scale out, and load-balancing capabilities. For an enterprise private cloud, these geo-redundant communication circuits are often cost prohibitive.
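The routing decision behind geographic load balancing often reduces to picking the region with the lowest measured latency to the user. Here is a minimal sketch of that selection step; the region names and round-trip times are invented for illustration:

```python
def closest_region(latencies_ms):
    """Pick the region with the lowest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical latency probes from one user's location to three regions
probes = {"us-east": 42.0, "eu-west": 110.0, "ap-south": 210.0}
print(closest_region(probes))  # us-east
```

In practice this decision is made by a GeoDNS or traffic-manager service rather than client code, but the underlying comparison is the same: route each user to the region that answers fastest.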
