June 5, 2019
VMware is to blame for many of the old and unsupported legacy systems we encounter in large organizations: airlines, telcos, banks, government agencies, and more.
In the last few years, we have come across a large number of unloved, and either forgotten or outsourced virtual machine farms, often running business-critical workloads. Some of these farms were built around 2012 and had since been doing a reasonable job at hiding the fact that the operating systems and applications running on top of these farms are out of date and out of support.
In many cases, the hypervisors and hardware were updated, but the virtual machines and applications stayed untouched. This is part of the benefit of running a virtual platform. In previous IT generations, a hardware refresh would have forced an operating system update, and this would have triggered an application refresh.
Now a layer of virtualization hides the fact that the application platform is in urgent need of an overhaul and has not seen much investment in the last five years. The high availability mechanisms built into and around the virtual platform even hide minor hardware outages without affecting the business—this is a “feature.”
The traditional triggers for required upgrades of software and applications are now hidden from the business, and IT spending is reduced.
The way many businesses want to deal with these legacy systems is by moving them to a cloud. The cloud was initially a platform for modern, cloud-native workloads, but it is quickly becoming a target for legacy system migrations and virtual server farm refreshes. For some managers, the cloud is simply a new outsourcing provider, and moving workloads to it makes the company look more modern and helps meet specific performance measures.
Various tools support the migration of virtual machines to public cloud instances, such as Veeam, CloudEndure, or Microsoft's Azure Site Recovery (ASR). They all ship the whole virtual machine, not just the application or data required, and they are the favorite tools of some system integrators.
When we look at this cloud migration path in more detail, we notice that in many cases it is not even possible to start the old and unsupported operating systems in a public cloud platform. The software or middleware supporting the applications is usually outdated and requires an update too. So a lift-and-shift of virtual system images to the cloud is not a viable option. It is probably a good time to look into that long overdue application update.
When we look at software deployed a decade ago, we see a few common patterns. An application platform often consists of various parts and typically follows a two- or three-tier architecture:
The presentation and application tiers are usually based on either a Windows IIS web server running a .NET application backed by a SQL Server database, or a Linux-based stack running IBM HTTP Server and WebSphere Application Server, or Oracle HTTP Server and WebLogic, backed by an Oracle or DB2 database. If you are lucky, it runs on Tomcat or JBoss Application Server, which likely means someone has looked at it within the last seven years.
The custom applications are usually written in Java or C#. The versions are as old as the underlying platforms and have usually been out of support for years. A rule of thumb for these systems is: the more complex the application, the less interest there has been in updating or patching the platform it runs on. Many apps integrate with further legacy systems, like messaging (e.g., IBM MQ or TIBCO) and identity and access management systems (e.g., IBM Tivoli Access Manager with Sun LDAP backends).
Most of the legacy middleware and integrations are from a time before the public cloud, which usually means that the software licenses are not cloud-friendly and do not allow for application portability to the public cloud. Even if some software is available in the public cloud, for example, Oracle databases in Amazon, the cost can be so high that it is not an option.
So the migration to the public cloud often comes down to a redeployment of the application to a more modern and more cloud-friendly stack in the public cloud. If there is neither the time nor the interest to re-architect a legacy application, a redeployment to Apache Tomcat or JBoss/WildFly middleware backed by an open-source database can be the best and most affordable path forward.
Migrating an application to a different middleware provider requires analysis of the application source code and the libraries the application depends on. Application migration assessment tools like Windup (the upstream project behind the Red Hat Application Migration Toolkit) help decompile and analyze Java applications to determine their suitability for migration to Tomcat or JBoss middleware and to a cloud-based platform. These tools produce a detailed report and highlight areas needing changes.
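At their core, such assessments boil down to static scans of the code for dependencies on proprietary middleware APIs. The following is a toy sketch of that idea, not a substitute for the real tools: the package-prefix list and file layout are illustrative assumptions.

```python
import re
from pathlib import Path

# Package prefixes that tie code to proprietary middleware.
# This list is illustrative, not exhaustive.
PROPRIETARY_PREFIXES = {
    "weblogic.": "Oracle WebLogic API",
    "com.ibm.websphere.": "IBM WebSphere API",
    "com.tivoli.": "IBM Tivoli API",
}

def scan_java_sources(root: str) -> dict:
    """Return {file: [findings]} for Java imports of proprietary packages."""
    findings = {}
    import_re = re.compile(r"^\s*import\s+([\w.]+)\s*;", re.MULTILINE)
    for src in Path(root).rglob("*.java"):
        text = src.read_text(errors="ignore")
        hits = [
            f"{imp} ({label})"
            for imp in import_re.findall(text)
            for prefix, label in PROPRIETARY_PREFIXES.items()
            if imp.startswith(prefix)
        ]
        if hits:
            findings[str(src)] = hits
    return findings
```

A source tree with no findings is a good sign for a Tomcat or JBoss redeployment; every hit represents code that must be rewritten against standard Java EE (or Jakarta EE) APIs or replaced.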
While analyzing the legacy applications for redeployment, it is worthwhile considering if the application may be container friendly. If an application can be cloud-native and can be deployed automatically, scale horizontally, and is stateless, then the application is potentially a good candidate to move to an orchestrated container environment like Kubernetes. Not every legacy application will become cloud-native.
Either way, the future deployment should follow modern software development practices, including a CI/CD pipeline with a degree of automated testing, as well as using code to stand up the infrastructure in the targeted cloud environments. Even legacy applications benefit from modern development and deployment methodologies.
Last but not least, the data tier has to be analyzed. This is typically where the actual content and data live, and it is the core of most traditional applications.
For various kinds of databases, there are free guides and tools for migrating the schema and data from a legacy database such as Oracle or DB2 to a more cloud-friendly solution like PostgreSQL or MariaDB. MariaDB even ships an Oracle compatibility mode that lets users keep Oracle PL/SQL code and Oracle-style sequences in an open-source database, which makes it easier for DBAs and developers to migrate and reuse database code.
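Much of the schema work in such a migration is mechanical type mapping. The sketch below shows the idea for an Oracle-to-PostgreSQL move; the mapping table is a deliberately simplified assumption, and real tools such as ora2pg handle precision, constraints, and many more types.

```python
# Simplified Oracle-to-PostgreSQL column type mapping.
# Real migration tools (e.g. ora2pg) cover many more cases.
ORACLE_TO_POSTGRES = {
    "VARCHAR2": "varchar",
    "NVARCHAR2": "varchar",
    "NUMBER": "numeric",
    "DATE": "timestamp",   # Oracle DATE carries a time component
    "CLOB": "text",
    "BLOB": "bytea",
}

def map_column_type(oracle_type: str) -> str:
    """Map an Oracle column type, optionally with a size suffix such as
    'VARCHAR2(100)', to a rough PostgreSQL equivalent."""
    base, _, size = oracle_type.partition("(")
    pg = ORACLE_TO_POSTGRES.get(base.strip().upper(), oracle_type.lower())
    return pg + ("(" + size if size else "")
```

For example, `map_column_type("VARCHAR2(100)")` yields `varchar(100)`, and `NUMBER(10,2)` becomes `numeric(10,2)`. With MariaDB's Oracle compatibility mode as the target, even less translation is needed, since many Oracle types and PL/SQL constructs are accepted directly.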
The cloud database infrastructure will need to be designed to meet availability requirements, or a provider-managed database can be used; these usually have high-availability options built in. The primary public cloud providers offer free tools to help migrate the database and its data to their respective managed database service, or to a virtual machine running a cloud-friendly database.
Once migrated, the systems still need management capabilities in the cloud: backup and restore (using cloud-native or third-party software), patching solutions, and identity management. These capabilities should be designed and built before migrating critical legacy applications to the cloud.
We should remember that software in the cloud still requires a lifecycle. If not, in a few years we will blame the cloud for the large number of old and unsupported legacy systems we find ourselves dealing with in large organizations.