
Cloud native on-premises: why is this the best way to start?


While the Seattle-based company has done a great job of convincing us that “Cloud native is AWS,” others, including the Cloud Native Computing Foundation (CNCF), have worked to extend this to an overall definition: “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” This definition makes clear that cloud native is limited neither to AWS nor to the public cloud. Moreover, the CNCF lists some technology components that are related to cloud native: “Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.” These technical components allow us to build applications in a new way, where systems are resilient, manageable, observable, and scalable. When we talk about Function-as-a-Service, serverless, Database-as-a-Service, or any other cloud-native technology, we are, in fact, talking about many things that are hidden behind your provider’s service catalog.

Cloud native: a DevOps point of view
To achieve this goal, you first rely on container technology, which gives you small, resilient components that you can easily manage and observe. This container technology could come from AWS, Cloud Foundry, Windows, or Linux. In the Linux world, Docker has been particularly successful because it is natively cloud-platform neutral. Can you run cloud-native technologies with Docker alone? Actually, yes, but given the technological complexity, you need a fully automated platform.
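To make the container idea concrete, here is a minimal sketch of how such a component is described declaratively. The base image, file names, and port are illustrative assumptions, not details from this article:

```dockerfile
# Hypothetical Dockerfile for a small Python web service.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY app.py .

# The service listens on port 8080 inside the container
EXPOSE 8080

CMD ["python", "app.py"]
```

Built with `docker build` and run with `docker run`, the same image behaves identically on a laptop, an on-premises cluster, or a public cloud, which is precisely what makes containers cloud-platform neutral.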

Containers need a set of different technologies to run. First, you need Infrastructure-as-a-Service (IaaS) to consume on-demand CPU, RAM, and storage, all accessible through APIs or native integrations. Once you have selected the best IaaS provider, you still need automation to provision your different environments and build the links between them. In the DevOps world, this environment automation is called continuous delivery. Using open-source tooling, or the tools provided by your IaaS provider, you can then build a delivery pipeline from development to production.
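A delivery pipeline of this kind is itself usually expressed as declarative configuration. The sketch below uses GitLab CI syntax as one possible example; the job names, registry URL, image names, and commands are all illustrative assumptions:

```yaml
# Hypothetical GitLab CI pipeline: registry URL, image names,
# and deployment commands are illustrative assumptions.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # Build and push the container image tagged with the commit SHA
    - docker build -t registry.example.com/my-service:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/my-service:$CI_COMMIT_SHORT_SHA

run-tests:
  stage: test
  script:
    # Run the test suite inside the image that was just built
    - docker run --rm registry.example.com/my-service:$CI_COMMIT_SHORT_SHA pytest

deploy-production:
  stage: deploy
  script:
    # Roll the new image out to the orchestration platform
    - kubectl set image deployment/my-service my-service=registry.example.com/my-service:$CI_COMMIT_SHORT_SHA
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

The point is not the specific tool: whichever CI/CD system you choose, the pipeline codifies the path from a developer's commit to a running container in production.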

The story could end here, but this is not enough. Due to the complexity of cloud-native applications, a single final app means running many containers, and this cannot be managed by hand once you have 1,000 of them. That’s why you need a container orchestration platform.

This platform can be provided by AWS, Azure, or Google, or you can choose to bring your own. Open-source projects have been created for this purpose – the well-known Kubernetes or Cloud Foundry, for example. Kubernetes, which was created by Google, is leading the way, even if some companies still use Cloud Foundry, Mesos, etc. Without going into too much detail, Cloud Foundry is a ready-to-use Platform-as-a-Service, whereas Kubernetes still needs some improvement. This is when DevOps comes to the fore once again, with what is called continuous integration – basically, a set of tools to run your code all along the software development lifecycle. Some of these tools are famous: Git with GitLab and GitHub for source control, SonarQube for code review, Artifactory for artifact repositories.
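Orchestration is also where the “declarative APIs” from the CNCF definition show up in practice: instead of starting containers by hand, you describe the desired state and the platform maintains it. A minimal Kubernetes sketch, in which every name and value is an illustrative assumption:

```yaml
# Hypothetical Kubernetes Deployment: names, image, and replica
# count are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3          # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          ports:
            - containerPort: 8080
```

Once applied with `kubectl apply`, the platform continuously reconciles reality against this description, restarting or rescheduling containers as needed; this reconciliation loop is what makes running 1,000 containers tractable.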

To summarize: in the DevOps world, when people talk about cloud-native technologies, they mean an app in which each function runs in a container, and these containers move to production through a DevOps pipeline.

Figure 1 – Cloud native: a DevOps point of view

Cloud native on premises: the first step toward a public cloud

Now you understand how you could theoretically reproduce this cloud-native setup in your own infrastructure. You also understand that this is going to take some effort. Thankfully, some products are already on the market to support you on this journey. OpenShift is the one I like most. In a future post, I will give you some reasons why I like it so much.

In conclusion, starting a cloud-native project on premises has many advantages:

  • Using your current infrastructure: mitigate your risk

The biggest challenge in moving to the public cloud is extending your current network and security policy outside the company. Starting on premises allows you to use your current infrastructure and simply extend it natively with a container orchestration platform.

  • Try and learn

The learning curve is particularly steep when you start. Even if your instinct is that starting directly on a cloud-native platform is faster, this is not always the case: internal policy, network configuration, and your operating model can really slow your project down.

  • Managing your costs

Many top organizations back away from public cloud adoption because they underestimate the total cost of ownership and the compliance requirements of the cloud platform. On premises, you can manage your costs effectively and prepare your organization to move to the public cloud step by step.

  • Hybrid cloud strategy

Finally, starting on premises opens the way to a move to the public cloud later. Meanwhile, your development teams will learn from your internal platform, and your infrastructure teams will do the same with public cloud availability. Once you are ready, bursting from private to public will really just be a detail, particularly if you chose the right orchestration platform from the beginning.

At Capgemini, with our partner Red Hat, we are used to supporting our clients in their cloud-native journey, from application development to infrastructure setup, through DevOps.

Learn how you can start your journey to a cloud-first way of working with Cloud Native powered by Red Hat.

We would be happy to contribute to your success.