Moving to the cloud & DevOps culture
“Move to the cloud!”
This objective is found in companies that may be very different from one another but face common issues:
- Getting away from the problems of maintaining physical hardware (hard disks, physical machines) that is not part of their core business
- Gaining access to technologies that they do not want to rebuild on their existing server base
- Avoiding physical constraints (disk space, for example) while keeping costs under control.
While these objectives are achievable, two key elements should not be underestimated:
- Increasing the knowledge and skills of teams in new tools and concepts.
- The pragmatism required to take account of the existing situation, which is often partly incompatible with these new concepts. For example, it is not uncommon to discover that an application developed as a monolith cannot be hosted properly on Kubernetes and Docker: the cost of adaptation is too high, so compromises have to be made to ensure the migration succeeds.
Kaliop is familiar with the main cloud solutions on the international and French markets (AWS, GCP, Azure, Scaleway) and has extensive experience of specific migrations. So, don’t hesitate to call on us if you have any strategic doubts about making your move to the cloud a success!
The promises of the cloud
Robustness
The cloud’s ability to offer a robust, highly available infrastructure is one of its key assets. Cloud providers all offer a contractual availability level of 99.9% or more for their services, which means that applications and data are accessible almost all the time. Recovery times are also better guaranteed: backups are almost always active by default, with optimised recovery processes.
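To make figures like 99.9% concrete, here is a small sketch (plain Python; the function name is ours) converting a contractual availability level into the yearly downtime it still allows:

```python
# Downtime budget implied by a contractual availability level.
# "Three nines" (99.9%) still permits almost nine hours of outage per year.

def downtime_per_year(availability_pct: float) -> float:
    """Return the maximum allowed downtime in hours per year."""
    return (1 - availability_pct / 100) * 365 * 24

for sla in (99.9, 99.99, 99.999):
    # 99.9% leaves roughly 8.76 hours of downtime per year
    print(f"{sla}% availability -> {downtime_per_year(sla):.2f} h downtime/year")
```

Each extra "nine" divides the allowed outage by ten, which is why higher contractual levels come at a significantly higher price.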
Security is not forgotten either. Cloud services generally incorporate advanced security protocols to protect sensitive data against cyber-attacks, including firewalls, intrusion detection systems and other defensive measures.
For international companies, the main operators also provide geographical redundancy through data centres spread around the world, making it even easier to match your infrastructure to your geographical needs.
Speed of adaptation
The cloud enables new services to be introduced faster.
In a cloud environment, IT resources such as storage capacity, computing power and memory can be obtained almost instantaneously in an on-demand manner. By comparison, obtaining additional resources in a physical environment can take significant time due to the need to purchase, deliver and configure additional hardware.
Cloud service providers make new features and technologies available quickly, enabling businesses to adopt recent innovations straightaway without having to invest in expensive hardware or spend time configuring it.
Automated deployment tools are developed alongside leading cloud solutions. Applications can be deployed quickly and consistently in a cloud environment. This significantly reduces the time needed to make new functionalities or entire applications available to end users, accelerating time-to-market.
Offer
Every major cloud provider offers dozens of different services. While their initial mission was to extend the possibilities of physical hosting by offering resources free of physical limitations, there is now a huge number of virtually ready-to-use, off-the-shelf services: everything to do with data, artificial intelligence, deployment tools, serverless solutions, and so on.
This offer is constantly evolving, with significant resources invested in research and development.
The downside of this cloud versatility
Ongoing learning
As we saw above, the main cloud players are constantly innovating. They are not content just to add new services; they also continue to develop existing ones. You can’t master everything on offer, but you do need to keep watch so you can seize the opportunities relevant to your own needs. You also have to anticipate major changes: even if they are rare, services may evolve in ways that break backward compatibility.
Even though providers generally take steps to document and support these updates, their impact cannot be taken lightly.
So you need to allow time not only to develop your skills, but also to keep them up to date.
Keeping costs under control
The complexity of cloud pricing models can make long-term costs difficult to predict accurately, which can lead to excessive or unexpected expenditure. As we saw earlier, it is quick and easy to change the services used and their configuration, which is why a corresponding “FinOps” approach is necessary.
Expenses are generally linked to resource usage. It is therefore essential to monitor the consumption of cloud resources regularly, using monitoring and cost-management tools to identify anomalies and sources of waste. Spending plans also need to account for variable resource usage in order to avoid costly surprises.
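The kind of anomaly detection that FinOps tooling performs can be sketched in a few lines. This is an illustrative example only, not a specific provider’s API: the data and the standard-deviation threshold rule are our assumptions.

```python
# Minimal FinOps-style anomaly check on a series of daily costs.
# The 2-sigma rule and the sample data are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(daily_costs: list[float], n_sigma: float = 2.0) -> list[int]:
    """Return indices of days whose cost deviates from the mean
    by more than n_sigma standard deviations."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    return [i for i, c in enumerate(daily_costs) if abs(c - mu) > n_sigma * sigma]

# A forgotten test cluster left running on day 4?
costs = [102, 98, 105, 97, 310, 101, 99]
print(flag_anomalies(costs))  # -> [4]
```

Real cost-management tools (AWS Cost Explorer, GCP billing reports, etc.) apply far more sophisticated models, but the principle is the same: compare actual spend against an expected baseline and alert on deviations.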
Although the above can be viewed negatively, cloud solutions also offer resource-optimisation levers with a positive impact on costs: shutting down unused resources, using reserved instances and selecting appropriate instance types can significantly reduce operational costs.
By understanding cloud providers’ pricing models, businesses can choose pricing packages tailored to their specific needs, which can reduce unnecessary costs and optimise the use of cloud services.
Organisational changes
Migrating to the cloud often involves significant organisational changes, particularly in terms of roles and responsibilities within IT teams. Establishing a culture focused on the cloud and cross-functional collaboration is essential to the success of this transition. Indeed, the impact on an IS is profound because work habits relating to architecture, data management, security and so on are all changed.
Implementation cycles are no longer the same: some actions no longer require lead times that previously could not be reduced. Even the lifespan of a service must be rethought, because a service can now be decommissioned immediately, since there is no hardware to manage.
Finally, tooling has to adapt to the cloud, which generally means introducing new tools, and this too has an impact on the existing organisation.
DevOps
What is DevOps?
Given these challenges, a DevOps approach is proving an effective solution for ensuring the smooth management of operations in a cloud environment.
It’s important to point out that DevOps is not a profession in its own right, but rather a cultural and organisational approach that fosters collaboration and integration between software development (Dev) and IT operations (Ops) teams. There are, however, specific roles and responsibilities linked to the practice of DevOps within an organisation. Here are two of them:
- DevOps Engineer: these professionals are responsible for implementing DevOps practices, including automating development, testing and deployment processes, as well as managing continuous integration and continuous deployment pipelines. They also ensure effective collaboration between development and operations teams, while promoting the adoption of agile development tools and practices.
- DevOps system administrator: these professionals are responsible for managing and configuring IT infrastructures, ensuring that systems are optimally configured to support agile development practices and automation processes. They work closely with development teams to ensure stable and reliable development and test environments.
Although DevOps is not a profession in itself, it has given rise to new roles and job titles that have become increasingly common in organisations focused on digital transformation and the adoption of agile software development practices.
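The continuous integration and continuous deployment pipelines mentioned above are typically described as code alongside the application. Here is a sketch of what such a pipeline can look like in GitLab CI; the stage names, registry URL and commands are illustrative assumptions, not a prescribed setup:

```yaml
# Sketch of a minimal GitLab CI pipeline: build an image, test it,
# then deploy it. Registry URL and commands are placeholders.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .

test:
  stage: test
  script:
    - docker run --rm registry.example.com/app:$CI_COMMIT_SHORT_SHA pytest

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/app app=registry.example.com/app:$CI_COMMIT_SHORT_SHA
  environment: production
  only:
    - main
```

Keeping the pipeline definition in the repository means it is versioned, reviewed and evolved with the same workflow as the application code.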
Why is this necessary in a cloud environment?
In a dedicated architecture, skills such as network engineering and operating-system administration are essential. The same requirements apply in a cloud architecture: you still need to manage networks and services, each with its own specific configuration, monitoring and follow-up requirements.
Due to the nature of the tools used, developers have a better understanding of what is available to them, particularly as they can potentially reproduce it locally using technologies such as Docker.
But make no mistake: even though your development teams may have a good understanding of the architecture, this does not mean they have mastered it completely. That understanding is a success factor for the smooth running of your applications, which is already a great achievement, but you won’t be able to do without DevOps skills.
The technologies involved
- Terraform: Terraform is an open-source tool for managing infrastructure as code. It allows DevOps teams to describe their cloud infrastructure in configuration files, making it easier to deploy and manage resources in heterogeneous cloud environments. What’s more, this code can be managed in a tool such as GitHub or GitLab, with the same practices found in development teams (branches, code reviews, etc.), and unit testing also becomes possible. This guarantees the ability to recover quickly from a major production incident, because everything can be rebuilt from the code.
- Docker: Docker is an open source platform for creating, deploying and managing applications in containers. It facilitates the rapid deployment and management of applications, guaranteeing portability and consistency between development and production environments.
- Kubernetes: Kubernetes is an open source container orchestration management platform that makes it easy to deploy, scale and manage containerised applications. It enables DevOps teams to efficiently manage complex workloads in large-scale cloud environments.
- Prometheus: Prometheus is an open source monitoring and alerting system designed to monitor cloud resources and applications in real time. It collects performance metrics and monitoring data, enabling DevOps teams to quickly diagnose and resolve performance issues.
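To illustrate infrastructure as code with Terraform, here is a minimal sketch in Terraform’s HCL format; the provider choice, region and bucket name are placeholder assumptions:

```hcl
# Minimal Terraform configuration: one AWS S3 bucket described as code.
# Provider, region and bucket name are illustrative placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-3"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"
  tags = {
    environment = "production"
  }
}
```

Running `terraform plan` previews the changes and `terraform apply` makes them, so the same reviewed code can rebuild the infrastructure after an incident.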
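Docker images are built from a Dockerfile, which describes the environment once so it runs identically in development and production. A minimal sketch, assuming a Python application with a `requirements.txt` and a `main.py` entry point:

```dockerfile
# Minimal Dockerfile: the same image runs identically everywhere.
# Base image and application layout are illustrative assumptions.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```

Building with `docker build -t app .` and running with `docker run app` gives every team member, and production, the same environment.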
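In Kubernetes, a containerised application is typically described by a Deployment manifest. Here is a sketch running three replicas; the image name, labels and port are placeholders:

```yaml
# Sketch of a Kubernetes Deployment: three replicas of a containerised
# application. Image name, labels and port are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` lets Kubernetes keep three instances running, replacing any that fail, which is exactly the orchestration work described above.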
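Prometheus works by periodically scraping metrics from configured targets. A minimal scrape configuration looks like the following sketch; the job name, target address and interval are illustrative:

```yaml
# Sketch of a prometheus.yml scrape configuration.
# Job name, target and interval are illustrative placeholders.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "web"
    static_configs:
      - targets: ["web:8080"]
```

From the collected metrics, teams can build dashboards and alerting rules to spot performance issues before users do.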