The four stages of your Public Cloud journey

Depending on who you believe, hype around the cloud is either reaching the "Peak of Inflated Expectations" (where early success stories capture lots of media and industry attention) or already slipping quickly towards the "Trough of Disillusionment" (where early deployments fail to live up to expectations and the industry begins to re-evaluate its approach). To further complicate matters, Gartner currently places advanced cloud services like serverless PaaS on the upward slope of the hype cycle. Whichever is correct, businesses are falling over themselves to declare that they have "cloud first" IT strategies in place.

Gartner predicts that over the course of the next year, "90 percent of organisations will lack a postmodern application integration strategy and execution ability, resulting in integration disorder, greater complexity and cost".

Experience bears out Gartner's conclusion, largely because there is serious confusion about what "cloud first" actually means in terms of strategy.

As they first enter cloud computing, most businesses assume that simply replicating their existing infrastructure in the cloud is the fulfilment of their strategy. But these businesses are still a long way from realising the full operational and financial benefits of the cloud.

In our view, there are four stages in the typical journey towards building a truly sustainable IT infrastructure in public cloud. Which stage has your business reached?

Stage One: Replicating IT in the Cloud

As mentioned earlier, businesses' initial cloud deployments focus on replicating their on-site infrastructure on a cloud platform. Understandably, many assume this is also the end of the process: not only are they now the proud "owners" of a hugely familiar platform that they understand inside and out, but they have also solved one of their most pressing problems – capacity constraints.

Businesses can spend years in Stage One, simply scaling outwards as demand for processing and storage increases. And because everything is so familiar, the DevOps model allows businesses to continue as normal, maintaining the same level of productivity and efficiency they have always enjoyed.

But a business that remains at this early stage of its cloud-first strategy may find that those early successes are not sustained.

Stage Two: Rebuild and automate

Also known as the "What happened to the cost savings we were promised?" phase, Stage Two is only initiated when the CFO begins asking difficult questions. Although the organisation may have cut capital spend on hardware, cloud costs continue to spiral – and no one is entirely sure why.

The problem is that the limitless scalability of the cloud can cause unforeseen problems for inexperienced cloud developers and systems engineers. Developers can make use of as many cloud vendor services as they like, but all are billed on a pay-as-you-use basis, which means many resources end up allocated inefficiently or unnecessarily.

It's no coincidence that Gartner's "Trough of Disillusionment" maps onto businesses running into problems in Stage One, forcing a re-evaluation of cloud deployments and strategy. As David Stanley, head of platform delivery at Trainline, said of his company's public cloud experience, "The bigger cultural change to go through after you've made all of your DevOps changes is to focus on costs."

Chastened by a severe rebuke from the CFO, the IT department begins re-engineering systems to align them with the various rules by which public cloud costs are calculated. For instance, by creating users and groups, businesses can control who has permission to request public cloud resources. Systems quickly become more efficient to operate, and costs easier to contain.
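
To make that concrete, the snippet below is a minimal sketch of how such access controls might be scripted against the AWS IAM API using the boto3 SDK. The group name, user name and choice of policy are illustrative assumptions, not a recommendation.

    import boto3

    # Minimal sketch of permission management via the AWS IAM API (boto3).
    # Group name, user name and policy ARN are illustrative assumptions.
    iam = boto3.client("iam")

    # Create a group for developers and restrict it to read-only access.
    iam.create_group(GroupName="developers")
    iam.attach_group_policy(
        GroupName="developers",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # Add an individual user to the group so permissions are managed
    # centrally rather than granted ad hoc.
    iam.create_user(UserName="jane.doe")
    iam.add_user_to_group(GroupName="developers", UserName="jane.doe")

In practice, a team wanting to limit who can request new resources would attach a more restrictive, purpose-built policy; ReadOnlyAccess is simply a convenient managed policy for the sketch.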

As their experience with cloud technologies deepens, developers also increase the level of automation used for their hosted systems. In a return to the batch-processing principles of old, for example, virtual machines are spun up during off-peak hours to reduce operating costs. AWS Auto Scaling, for instance, allows developers to scale resources on a pre-defined schedule, ensuring capacity is available for predictable demand and released outside those hours to bring spend in line with use.
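
As a hedged illustration of that schedule-based approach, the sketch below uses boto3 to attach two scheduled actions to an assumed Auto Scaling group: one scales it up for the working day, the other releases the capacity overnight.

    import boto3

    # Illustrative only: the Auto Scaling group name and sizes are assumptions.
    # Recurrence uses standard cron syntax, evaluated in UTC by default.
    autoscaling = boto3.client("autoscaling")

    # Scale up at 08:00 on weekdays (days 1-5) to meet predictable demand.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="batch-workers",
        ScheduledActionName="scale-up-office-hours",
        Recurrence="0 8 * * 1-5",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=4,
    )

    # Scale to zero at 20:00 so idle instances stop accruing cost.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="batch-workers",
        ScheduledActionName="scale-down-overnight",
        Recurrence="0 20 * * 1-5",
        MinSize=0,
        MaxSize=0,
        DesiredCapacity=0,
    )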

With systems redesigned for the cloud, and automation used to streamline operations, the CFO will be much happier as bills come back under control. Spend may still have to increase, but this rationalisation process ensures that costs are better contained and fully justifiable.

Stage Three: Containerisation

Although costs are now under control, the hosted infrastructure is still relatively resource-intensive to manage. At the end of Stage Two, applications still reside within virtual machines running on a hypervisor, which itself sits on top of the cloud layer.

Many of these virtualised systems will be set-and-forget, but that downplays the work involved in configuring them at the point of deployment. There is also the reality that wide-scale configuration changes will be needed at times – and the IT team will have to carry out that work to ensure systems continue to operate as expected.

With applications, binaries and libraries, a guest OS and a hypervisor all installed on top of the host operating system, there are multiple layers to manage. This is where Stage Three begins, with the goal of simplifying application architecture and reducing both management overheads and cloud resource usage.

Using a container engine such as Docker (often orchestrated with Kubernetes) installed directly on the host operating system, developers can re-engineer applications without the need for a dedicated hypervisor or virtual machine. Code is packaged and run in a "container" that holds nothing more than the application and its dependencies (a short sketch follows the list below). This offers several advantages:

  • The containerised application is more lightweight, helping to reduce demands on server resources and running costs
  • The application becomes more portable, allowing it to be redeployed in another cloud service with minimal effort
  • Developers can focus entirely on building an application or service, without having to worry about the operating system or other secondary factors that complicate the process
  • With fewer layers to manage, administrators and engineers are freed to focus on other strategic projects.
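
As a rough sketch of the workflow, and assuming the Docker SDK for Python plus a project directory containing a Dockerfile, the example below builds an application image and runs it as a container; the image tag and port mapping are placeholders.

    import docker

    # Illustrative sketch using the Docker SDK for Python ("pip install docker").
    # Assumes a Dockerfile in the current directory; tag and port are placeholders.
    client = docker.from_env()

    # Build an image containing only the application and its dependencies.
    image, build_logs = client.images.build(path=".", tag="myapp:latest")

    # Run the image as a lightweight container, mapping port 8080 to the host.
    container = client.containers.run(
        "myapp:latest",
        detach=True,
        ports={"8080/tcp": 8080},
    )
    print(f"Started container {container.short_id}")

The same image can then be pushed to a registry and deployed largely unchanged to another cloud provider or a Kubernetes cluster, which is where the portability benefit above comes from.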

As a bonus, the reduced demand on server resources helps to lower costs further, which means even less hassle from the CFO.

Stage Four: Truly serverless computing

By the end of Stage Three, computing resources have been dramatically streamlined, reducing application footprint and operating costs, and increasing portability of code. Again, the temptation is to assume that the cloud-migration process is complete, but there are still more improvements to be made.

At this point, applications are built into standalone containers, but the Docker platform on which they run still consumes server resources within the cloud vendor's data centre. In many cases, developers are also packaging what amounts to a complete operating system image inside their containers. This is not necessarily a bad thing, but applications can be further streamlined, which in turn reduces development effort and cost.

More concerning is that by retaining an OS inside the container, the environment keeps the same administrative and security overheads. Containerisation may deliver greater workload density than virtual machines, but including the OS in a container does little to reduce maintenance complexity – or operating overheads.

The final stage of the cloud-first migration is to develop and deploy "serverless" applications. As the name implies, these applications are designed to scale back reliance on Kubernetes, Docker and cloud servers as far as possible. Instead of drawing on fully compiled applications and virtual machines, serverless applications are built using containerised libraries and binaries hosted on cloud platforms, linked together with APIs provided by public cloud services.

Put simply, serverless applications take the building blocks provided by various online services and "glue" them together. Essentially, Stage Four applications take advantage of "Function as a Service" (FaaS) to reduce costs and overheads.
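
As a hedged illustration of the FaaS model, here is a minimal AWS Lambda-style handler in Python: a single function the platform invokes on demand, with no server, virtual machine or container image for the team to manage. The event field and response shape are assumptions for the example (the format shown suits an HTTP trigger).

    import json

    # Minimal sketch of a Function-as-a-Service handler (AWS Lambda-style).
    # The platform provisions and scales the runtime; the team supplies only
    # this function. The "name" field in the event is an illustrative assumption.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }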

These applications still scale on demand, drawing on resources from third parties when required, but with a minimal local footprint. Linking services in this way speeds up development further and reduces the maintenance required to cope with updates to the services being used.

And because your new applications rely on SaaS-level services provided by third parties, there is no need (or indeed ability) to adjust the configuration of components like httpd or MongoDB – these functions are handled by the third-party provider. There will always be some operational overhead for cloud-based applications, but by outsourcing infrastructure and OS responsibilities to external service providers, the internal running cost is reduced further still.

Time to assess your progress

With the cloud-first journey mapped out, it becomes much easier to assess which stage your business has reached. Many late adopters will still be at Stage One, meaning they have some distance yet to travel. Most organisations, however, will have reached Stage Two and are investigating how best to use cloud infrastructure to improve their own operations.

Cloud computing is a dramatic shift in corporate IT, and many are still learning how to take full advantage of the benefits provided by public cloud. The skills shortage is compounded by the rapid pace of platform development at AWS, Azure and Google Cloud Platform – so it is little surprise that businesses are struggling to reach their cloud-first goals, especially when true cloud-first strategy remains so poorly understood.

Claranet is the highest-accredited multi-cloud provider worldwide: an active member of the Amazon Partner Network with both Premier Consulting Partner and Managed Service Provider status; a Google Cloud Premier Partner with the Partner Specialisation in Infrastructure; a Microsoft Gold Partner; and a Kubernetes Certified Service Provider.

Find out more.
