What 19th century railroad wars can teach us about building a future-ready cloud
Sr. Director of Product Management, Cloud Runtimes
Just like railroads of yore, if you want to build for the future, faster and with fewer vulnerabilities, you’ll want the standardization found in containers.
Like most companies today, you probably have existing applications in your technology arsenal. While I avoid using the word “legacy” too often, the term fits here.
Normally, a legacy denotes something good left behind — an inheritance or a meaningful contribution. But in IT, describing systems or tech as legacy suggests something outdated, something that might slow you down or inhibit growth.
So what can you do to ensure your technology legacy is something that builds for the future, not a burden from the past?
By now, you’ve likely heard something about containers and how they’re revolutionizing application development; you’ve probably heard less about the business benefits they deliver.
While containers have been around since the 1970s, the rise of Docker, cloud computing, and container orchestration platforms like Kubernetes has transformed containers into a key technology for making enterprise IT more flexible. A 2020 annual survey by the Cloud Native Computing Foundation found that container usage in production had jumped 300% since its first survey in 2016. The trend has continued, and the foundation recently estimated that 96% of organizations are now either using the open-source Kubernetes container management system or evaluating it.
One of the main benefits of container technology cited by these organizations is the ability to accelerate development by making it easier to move software across multiple environments. Another major feature is also a critical ingredient for digital transformation — standardization.
Standardization, or a lack thereof, is a problem almost as old as technology itself, and I don’t just mean digital technology. One of my favorite examples stretches back almost two centuries, to the dawn of the railroad.
Outdated technology takes you off the rails
In the 1840s, competing railway operators in Great Britain chose two different track gauges (narrow versus broad) for their areas of service. This created huge inefficiencies at junctions where gauge sizes didn’t match.
Passengers regularly faced the unpleasant chaos of changing trains, sometimes more than once, hauling all their luggage along the way to accommodate the abrupt gaps between incompatible rails. Goods and other shipments also had to be transloaded (unpacked from one car and reloaded into another) at junctions, creating long delays and higher costs for businesses.
As the railways expanded into new territory, the issues became so disruptive and costly that an argument ensued over choosing an official gauge that would allow trains to run without interruption. The “Gauge War” was driven by business profit and market control, but standardization is its enduring legacy.
It took an act of Parliament in 1846 to settle the matter, mandating 4 feet 8 ½ inches as the standard gauge size for all new railway tracks in Britain — a change that would decades later enable any locomotive and its freight to run on multiple lines, facilitating smoother transport for passengers and shipments.
The same inefficiency exists today in companies with older systems, even those already running on virtual machines in on-premises data centers. In the past, development focused on custom ways of doing things. Applications were purpose-built to fit your company’s needs and run in your own data centers, and that was usually as far as they would — or could — go.
Within a bank, for example, the equities group could be using a completely different technology from the wealth management group, which is also different from the technology used to build the customer-facing consumer banking system. These differences might not initially be deliberate decisions, but they build up over time due to organizational culture and departmental divisions.
While these systems get the job done, they require more work and lead to fragmentation. Eventually, friction between systems can arise within the organization, and especially beyond it, in an increasingly networked and connected economy. Instead of gaining momentum and forward velocity, enterprises often find themselves hindered by what are effectively “gaps between the rails,” with no direct connectivity between data, systems, and resources.
Since every system works slightly differently, developers spend most of their time trying to achieve the speed, interoperability, security, and scalability needed to retain customers and win new business. Many teams end up investing huge amounts of resources to keep everything running smoothly, building custom integrations, developing custom APIs, and refactoring systems so they can communicate with each other.
Why containers help lay tracks for the future
With increasing pressure to adapt quickly and meet customers wherever they are, companies are realizing their older development approaches aren’t standing the test of time.
When L.L.Bean set out to improve its omnichannel shopping experiences, the family-owned company wanted to remove as much friction as possible for customers and reach them on multiple channels, whether print, physical store locations, website, app, or social media. But while its duck boots and flannel are as iconic as its mail-order catalog, L.L.Bean’s older, on-premises IT systems were not.
The retailer relied on separate applications, many of which had been around for about two decades, to support specific channels, and each performed in its own way. These loosely-connected systems, combined with an on-premises IT environment, made it cumbersome and expensive just to carry out upgrades or add more computing capacity — let alone deliver dynamic, omnichannel customer experiences. Adding any new functionality often required sticking to a rigid, eight-week release cycle that made it difficult to experiment and deploy new features quickly.
Similar to railroad standardization, adopting containers allows teams to move forward with less disruption by providing a universal standard to package code and deploy it reliably in any environment. They are designed to work at every step of the development process, so you can iterate faster from problem to code to test to production. No matter what platform you choose, a container remains the same.
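To make the “universal standard to package code” concrete, here is a minimal, illustrative Dockerfile. The base image, file names, and port are assumptions for the sketch, not details from any system described in this article; the point is that the same image runs unchanged on a laptop, in a data center, or on a managed platform.

```dockerfile
# Illustrative sketch: package a small Node.js service as a container image.
# The app layout (package.json, server.js) and port 8080 are hypothetical.
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Built once with `docker build -t my-service .`, this image is the standard “gauge”: any environment that runs containers can pull and run it without the transloading work of re-packaging for each platform.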
In the case of L.L.Bean, adopting containers and a managed Google Kubernetes Engine (GKE) environment for deploying containerized applications streamlined the process for getting new development up and running and making upgrades. It also made it possible to integrate data from multiple source systems easily, scale capacity up or down during peak shopping times, improve website and mobile performance, and deliver new capabilities and features to customers faster.
“Similar to railroad standardization, adopting containers allows teams to move forward with less disruption by providing a universal standard to package code and deploy it reliably in any environment.”
Drew Bradstock, Senior Director, Google Kubernetes Engine
Instead of wasting time on bridging gaps, containers make it easy to build whatever you need, wherever you want. You can run them in any environment and tap into the latest development tools, modern languages, and a thriving ecosystem of products and services to solve current and future business problems. If you choose a managed container platform, you also have access to expert support from the cloud provider should questions or issues arise.
Ultimately, diverse standards in technology inevitably create gaps that slow development and block innovation. Without the ability to reuse people, skills, and components, your organization can’t accelerate turnaround cycles or respond to change fast enough to stay ahead of the competition.
As with railway lines, you can’t reach real velocity without consistency.
Connecting lines to innovation destinations
Standardization is one of the main advantages of containers, but it also helps set off a chain reaction to drive true cultural change. Instead of using technology to support existing methods, containers act as a transformation point to remove barriers to development, automate away manual processes, and continue scaling without having to rebuild continuously.
Imagine what’s possible when the time between releases goes from months to weeks or even days. Applications can be scaled up to manage peak events and back down to minimize costs. Developers can leverage ready-made solutions instead of building systems alone. You can attract a new wave of talent with leading skills who want to work with new tools, open-source technologies, and advanced capabilities.
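Scaling up for peak events and back down to contain costs is typically handled declaratively in Kubernetes. The sketch below is an illustrative HorizontalPodAutoscaler manifest; the Deployment name, replica bounds, and CPU target are hypothetical values, not figures from the customers mentioned here.

```yaml
# Illustrative autoscaling config: Kubernetes adjusts the number of running
# copies of a hypothetical "storefront" Deployment between 2 and 50 replicas,
# targeting 70% average CPU utilization, so capacity follows demand.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Applied with `kubectl apply -f`, a policy like this is what lets the same application absorb a peak shopping day and then shrink back without manual intervention.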
It really has the power to change your entire approach.
Univision scaled up to handle billions of viewers watching the 2022 FIFA World Cup on TV and online.
PGS used GKE to replace its two on-premises supercomputers, running workloads with over a million virtual CPUs. At peak capacity, its GKE supercomputer can run with a hypothetical 72.02 petaFLOPS capability — the equivalent of the world’s 7th largest supercomputer.
Sabre has adopted Site Reliability Engineering (SRE) practices to accelerate digital transformation, leveraging microservices and container technologies in GKE and managed cloud databases to process over 12 billion shopping requests from over a billion travelers.
These are just a few examples of the success I’ve seen our customers achieve, and we are only starting to scratch the surface of the future of container technology.
What you’re doing today might still be working, but there will come a time when it doesn’t anymore. Given how containers are driving the next generation of innovation — do you want to get on board or get left at the station?