With Cloud computing, inside VLEs and across the IT media industry, there is a groupthink that all technological innovation is now ‘Digital Transformation’. This is, of course, a lot of nonsense. Cloud computing covers Infrastructure, Platforms, Software as a Service, Security, application re-platforming, data-layer improvements, Single Sign-On and an almost unlimited range of technical opportunities which may, or may not, be related to Digital Transformation. Putting all of this technology under one rubric is pointless. Digital Transformation used to mean, and should still mean, the following:
DT is the application of digital capabilities to processes, products, and assets to improve efficiency, enhance customer value, manage risk, and uncover new monetization opportunities.
Some applications and systems are indeed Digitally relevant. Many are not. Most workloads in most VLEs are not customer-facing revenue engines, but workflow systems managing processes, data and functional requirements. They may, or may not, feed into a Digital system. Not every company is a software company, and most firms are not Netflix.
Not everything is ‘Digital’, needing ‘DevSecOps’, ‘AI Automation’ or other buzzwords. There is, however, a market opportunity to help firms move Legacy Applications, which can be troublesome and cumbersome, to Public Cloud platforms. One approach is to automate the use of Containers (or VMs) and apply a ‘factory’ approach to migrating (not necessarily transforming) applications to the Cloud. Public Clouds do have tools in place to aid with this, but often they are not built to scale when many applications or ‘heavier’ systems are in play.
The IT media chatters that most companies have adopted a bimodal IT approach, in which the traditional (“mode 1”) waterfall / legacy approach to IT remains unchanged, while the adoption of emerging digital technology is pursued through new exploratory / agile approaches. The assumption, which is wrong, is that every firm is a software company.
Agile is a process of IT delivery in which the business is integrated with IT subject-matter experts (SMEs) on a product development or a project (both are valid). Agile can be used in development, migrations, refactoring, or even in planning new marketing products (IT SMEs would obviously not be involved with the latter). There is a legion of challenges with Agile (cultural, organisational, budgeting, skills, HR, change management, offshore delivery, etc.). Within IT projects, if we accept the ‘bimodal’ view of technology, there are some very sharp challenges in integrating ‘modes’ 1 and 2:
- Lack of integration points in legacy applications: apps often lack relevance because key data and services from the legacy stack cannot be accessed.
- Complexity of legacy technology: additional requirements from digitalisation projects further increase the complexity of the legacy technology. An example is the need (whether relevant or not) to ‘de-couple’ and ‘de-compose’ applications into micro-services (yet another buzzword).
- Slow time-to-market of legacy applications: added complexity, oftentimes unnecessary, results in an even slower time-to-market. Re-architecting legacy applications is often not in the focus or budgets of innovation initiatives, or is deemed too expensive and time-consuming to engage in.
- Methodology and tool mismatch: Agile development and native cloud tool-chains often ignore the realities of legacy applications, so time-to-market for business-critical functions does not improve.
Maybe, by using cloud and DevOps technology, some firms with highly skilled experts can overcome these issues and begin to engage in ‘digital transformations’, while at the same time increasing the productivity of the IT organisation as a whole.
Gradual deployment into the Cloud(s)
There is some veracity in the complaint that while “lift and re-platform” approaches from in-house data centres to Public Clouds succeed, the benefits in terms of scalability, flexibility and time-to-market are far below those of “native” cloud applications. Keep in mind that ‘cloud native’ entails a re-architecting and re-build, (most likely) the use of micro-services and Containers, and consumes both budget and time. Not every application must be Cloud-Native (another ridiculous myth). But for applications which face an end consumer, in a competitive market, where application intelligence can be used (e.g. automated loan approvals), Cloud-Native is probably the only real option.
To enable targeted legacy applications to take advantage of Cloud-Digital technology, there is an approach firms can follow using existing platforms. It entails an OpenShift (recommended), or OpenStack (not recommended), Private Cloud infrastructure, and automating the deployment into the targeted Public Cloud. This provides:
- Open source technology with reduced vendor lock-in. The actual infrastructure can be sourced from the “hyper-3” (Amazon, Microsoft, Google), or even in-house.
- Private Cloud Assembly Lines: the latest build-pipeline technology allows automated set-up of CI/CD toolchains. Configuration of assembly lines is provided as code and can be version-controlled.
- Cloud Dashboard: the business objectives of the Cloud (higher productivity, higher quality, higher speed) are tracked in real-time dashboards.
- Ability to run Hybrid applications with a cloud connect service.
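The “assembly lines as code” idea can be illustrated with a minimal sketch. Everything here (the `Stage` and `AssemblyLine` classes, the stage and image names) is hypothetical and not part of any product API; the point is simply that the line's definition lives in version control and can be replayed to stand up an identical toolchain:

```python
# Hypothetical sketch: an assembly-line definition kept as code, so it can be
# version-controlled and replayed to provision identical CI/CD toolchains.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    image: str                           # container image providing the tool
    depends_on: list = field(default_factory=list)

@dataclass
class AssemblyLine:
    name: str
    stages: list

    def ordered(self):
        """Order stages so each runs after its declared dependencies."""
        done, order = set(), []
        def visit(stage):
            for dep in stage.depends_on:
                visit(next(s for s in self.stages if s.name == dep))
            if stage.name not in done:
                done.add(stage.name)
                order.append(stage.name)
        for s in self.stages:
            visit(s)
        return order

line = AssemblyLine("crm-migration", [
    Stage("build", "jenkins-agent"),
    Stage("scan", "sonarqube-scanner", depends_on=["build"]),
    Stage("deploy-test", "oc-cli", depends_on=["scan"]),
])
print(line.ordered())  # ['build', 'scan', 'deploy-test']
```

Because the definition is plain code, a change to the line is a reviewable commit rather than a manual reconfiguration of a CI server.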
Example: On-Premise OpenShift to AWS (source: access.redhat.com/articles/2386731, Red Hat, 2016)
Summary of the components of a Cloud ‘Factory’ process:

| Component | Description | Key technology |
|---|---|---|
| OpenStack Infrastructure | A ‘Factory’ can be set up automatically by Ansible scripts on any kind of OpenStack infrastructure, whether a public, enterprise, or private cloud. Manual configuration is limited to very basic tasks such as establishing network connectivity and provisioning certificates. | OpenStack, Ansible |
| OpenShift / Kubernetes | The management of both core and domain-specific components is handled by Red Hat OpenShift. This component is based on Google’s Kubernetes and Docker, which reduces vendor lock-in to Red Hat, while gaps in the “enterprise-readiness” of Kubernetes (e.g. high-availability proxies) have been filled. | Red Hat OpenShift, Kubernetes, Docker |
| Registry Core Services | The core services are the basic building blocks of the Cloud Factory. They are used not only to manage the Cloud Factory itself but also to provide reusable services (e.g. GitLab, Jenkins) across domains. The services are provided as standard Docker containers in a Docker registry, with additional metadata that allows automated provisioning in OpenShift. | Docker Registry |
| Runtime Core Services | The core services which ship with the Cloud Factory. | Atlassian Jira, Atlassian Confluence, Mattermost, OpenLDAP |
| Registry Domain Services | Additional services specific to a domain are provided in either GitLab (source code) and/or Artifactory (deployment units). | JFrog Artifactory, GitLab |
| Assembly Lines | The assembly lines are based on standard CI/CD components, but go beyond the provisioning of best-in-class CI/CD tools: they can be built with fully automated “commit-to-deploy” templates in JobDSL, so that manual configuration is reduced to a minimum and the configuration itself can be version-controlled. | Jenkins, SonarQube, Selenium |
| Monitoring | Monitoring is divided into two areas: cluster monitoring of the Cloud itself (using Heapster) and application monitoring (using New Relic or Nagios). | Heapster, New Relic |
| Logging | A Cloud can leverage the widely adopted ELK stack for log aggregation and visualisation. By configuring custom queries and dashboards, logging can also provide specific KPIs such as transaction times of important business functions. | Elasticsearch, Logstash, Kibana |
| Transformation Dashboard | Firms will need a cloud migration assessment (part of a Cloud Service Orchestration Platform), which helps identify the applications most relevant for migration and can also track the status of the application migration. | Cloud Migrate templates |
| Productivity Dashboard | Productivity is tracked in JIRA, but a consolidated overview across all work-streams, comparing planned vs. actual figures, normally requires a lot of custom reporting. With private Cloud dashboards, this information is available in an easy-to-digest graphical form in real time; each item supports a drill-down into JIRA. | Atlassian JIRA, HTML5 |
| Quality Dashboard | The quality dashboard comprises two components: SonarQube for measuring static code quality, and a custom JIRA dashboard to track issues from test and production. | SonarQube, Atlassian JIRA, HTML5 |
| Speed Dashboard | GitLab Cycle Analytics. | GitLab |
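The quality-dashboard row above is essentially a consolidation step. A minimal sketch, assuming the general shape of SonarQube's measures response and an already-fetched JIRA defect count — the payload values below are illustrative, not captured from a live system:

```python
# Hypothetical sketch of the quality dashboard's consolidation step: it merges
# static-analysis measures (as returned by a SonarQube measures query) with an
# open-defect count from JIRA into a single KPI record.
def quality_kpis(sonar_payload, jira_open_defects):
    measures = {m["metric"]: float(m["value"])
                for m in sonar_payload["component"]["measures"]}
    return {
        "internal": {                    # product quality from code analysis
            "bugs": measures.get("bugs", 0.0),
            "coverage_pct": measures.get("coverage", 0.0),
        },
        "external": {                    # quality as seen in test / production
            "open_defects": jira_open_defects,
        },
    }

# Illustrative payload only, shaped like a SonarQube measures response.
sonar = {"component": {"measures": [
    {"metric": "bugs", "value": "4"},
    {"metric": "coverage", "value": "71.5"},
]}}
print(quality_kpis(sonar, jira_open_defects=9))
```

Keeping the consolidation in one small function makes the dashboard's definition of “quality” explicit and reviewable, rather than buried in custom reporting.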
With automatic provisioning capability, the infrastructure allows firms to set up additional assembly lines whenever the need arises. There is no further postponing of new feature development because of test or release windows. An example using Terraform and OpenShift is here.
There are two different scopes in which assembly lines operate:
- Sub-domain (assembly) lines provide a self-contained environment for an individual sub-domain (e.g. a single application such as a CRM system).
- Domain (assembly) lines provide a means to integrate and stage a new release across multiple sub-domains (e.g. an application cluster such as an omni-channel frontend) into production.
Within each scope, lines run at two cadences:

- Slow Assembly Line: for releases of “mode 1” applications, following the traditional release plan.
- Fast Assembly Line: for more frequent releases. A dedicated assembly line permits a wider scope of changes (compared to “hotfixes” or “direct delivery” of individual applications) as well as comprehensive integration tests. After a release through the fast assembly line, a pull request is sent to the slow assembly line so that all changes are re-integrated.
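The fast/slow hand-off described above can be sketched as follows. The class names and change names are hypothetical; the point is that every fast-line release queues a pull request so the slow line never drifts out of date:

```python
# Hypothetical sketch of the fast/slow line hand-off: each fast-line release
# opens a pull request against the slow line, and the slow line's next
# traditional release re-integrates all pending fast-line changes.
class SlowLine:
    def __init__(self):
        self.pending_prs = []

    def open_pr(self, change):
        self.pending_prs.append(change)

    def release(self):
        # a slow-line release merges everything queued since the last one
        merged, self.pending_prs = self.pending_prs, []
        return merged

class FastLine:
    def __init__(self, slow_line):
        self.slow_line = slow_line

    def release(self, change):
        # deploy via the fast line, then hand the change back for re-integration
        self.slow_line.open_pr(change)

slow = SlowLine()
fast = FastLine(slow)
fast.release("loan-approval-v2")
fast.release("kyc-check-v3")
print(slow.release())  # ['loan-approval-v2', 'kyc-check-v3']
```

The invariant worth noting is that a slow-line release always drains the queue: nothing shipped on the fast line can be silently lost at the next “mode 1” release window.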
This network results in the following release timelines, and the flexible approach could allow a gradual enabling of legacy applications within targeted Cloud Platforms. Automation of builds and deployments matures through three levels:
- Level 0: Build automated, the deployment units can be built and verified (unit tested) without human interaction.
- Level 1: Deployment automated, the deployment units can be installed in a non-productive runtime environment and verified (smoke tested).
Note: The deployment in production may still require human interaction as per current governance models.
- Level 2: Test automated, the application can be tested regarding functional, non-functional, and integration requirements without human interaction.
Legacy applications themselves can be integrated at three levels:

- Level 0: Connect, so that an existing application or service can be connected from the On-Premise Cloud (e.g. for static / singular services).
- Level 1: Manual or semi-automated installation in a VM, so that individual instances of an existing application or service can be provided for assembly lines with very low effort.
- Level 2: Fully automated installation in a container, so that individual instances of an existing application or service can be provided using the standard private Cloud / OpenShift provisioning mechanisms.
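A migration assessment can derive these levels from a few capability flags recorded per application. This is a hypothetical sketch, not a tool's API; the flag names are assumptions:

```python
# Hypothetical sketch: deriving an application's automation level (as defined
# in the level ladder above) from capability flags recorded during the
# migration assessment.
def automation_level(build_automated, deploy_automated, tests_automated):
    if not build_automated:
        return None        # not yet on the factory at all
    if not deploy_automated:
        return 0           # Level 0: build automated (unit-tested without humans)
    if not tests_automated:
        return 1           # Level 1: deployment to non-production automated
    return 2               # Level 2: functional/integration tests automated

print(automation_level(True, True, False))  # 1
```

Classifying every application this way gives the transformation dashboard a simple, comparable maturity score per workload.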
It is, however, insufficient to measure progress only in the technical enabling of infrastructure and applications. In the end, it is the gain in productivity, quality, and speed that matters for businesses. Hence, the hybrid Cloud provides three specific dashboards for this purpose:
- Productivity: productivity is expressed as the scope delivered per time interval. The productivity dashboard also allows the budget of agile projects to be tracked, so that overall budget plans can be met even when business priorities shift.
- Quality: the quality dashboard looks at two areas: internal product quality, expressed by parameters such as code quality and code complexity; and external product quality, expressed by defects in test and problems in operations.
- Speed: while speed is partly captured in productivity, low productivity alone gives no indication of the root cause. The speed dashboard therefore measures the individual steps in the assembly lines and gives clear indications of where speed can be improved to achieve higher productivity.
With these dashboards, the improvements can be measured and concrete KPIs can be established for performance-oriented commercial models.
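The speed dashboard's core calculation can be sketched as follows. The event shape and stage names are assumptions for illustration; the idea is that per-stage durations expose the bottleneck, where an aggregate productivity figure would not:

```python
# Hypothetical sketch of the speed dashboard's core calculation: sum the time
# spent in each assembly-line stage and flag the slowest one, so the root
# cause of low productivity is visible rather than hidden in an overall figure.
def stage_durations(events):
    """events: list of (stage, start, end) tuples, timestamps in minutes."""
    totals = {}
    for stage, start, end in events:
        totals[stage] = totals.get(stage, 0) + (end - start)
    bottleneck = max(totals, key=totals.get)
    return totals, bottleneck

# Illustrative pipeline runs: two builds, two test phases, one deployment.
events = [
    ("build", 0, 12), ("test", 12, 55), ("deploy", 55, 63),
    ("build", 100, 110), ("test", 110, 150),
]
totals, bottleneck = stage_durations(events)
print(totals, bottleneck)  # {'build': 22, 'test': 83, 'deploy': 8} test
```

Here the test stage dominates, so test automation (not faster builds) is the step that would raise throughput, which is exactly the kind of indication the speed dashboard is meant to give.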