Information technology has gone through more than one major transformation, and countless smaller ones, over the course of its never-ending development. The software development process is constantly evolving. The rise of microservice architecture, independence from a single cloud provider (or distribution across several clouds), and the emergence of serverless applications and databases keep this transformation going. Development is dynamic and changeable, and it requires monitoring, analysis, forecasting, data processing and, of course, automation of repetitive processes and tasks.
Services and microservices. A trend or a standard?
Microservice architecture has become a basis of the DevOps methodology and a timeless trend. In short, the idea is to split the development of a monolithic product, complex application, or software package into many single-purpose components or modules that can be quickly changed, modernized, or improved almost independently of one another, without harming other modules and microservices. This allows flexible and continuous development and updating of the main application, instead of spending time and effort on the approval and debugging of bulky, monolithic releases.
It is worth noting that a modular approach can also be used in a monolithic architecture, and the SOA (service-oriented architecture) principle is likewise focused on services. In contrast to both, in a microservice architecture the entire program consists of a number of separate single-purpose services, while SOA is a group of services that mostly act in concert to support the program and its deployment.
In practice, especially during the transition of an already working software product or application to a microservice architecture, the DevOps team often has to combine different solutions, practices, and approaches based on its own experience. The Serverbee team also builds on the principle of microservice architecture, because it has proven itself well. In most cases it speeds up development, makes it easier, improves the security of the product and the development process, helps avoid many errors, and improves scalability and reliability. Good practice and experience let you achieve the best results by using this approach skillfully.
Serverless - computation and code run as a service in the platform's environment
Serverless is a cloud computing service. It lets you perform computations or run code, programs, subroutines, or databases without installing and configuring a server, services, and components. The cloud provider supplies an environment in which your code can run as quickly as possible, and it manages resource allocation and the elasticity of consumption. Serverless computing executes application logic but does not store data; instead, it runs in short sessions and writes its results out to storage. Resources are allocated only when the program is launched and used, and only that usage is charged.
In practice, serverless code can be combined with traditionally deployed code, which simplifies deployment to production. This is appropriate when you occasionally need to run and/or test code or some functionality outside the main infrastructure, in addition to the main solution. Serverless applications can also be temporary by nature: written as serverless and then deleted when their time period ends or the required action is done.
A serverless service is usually offered as a function (FaaS) and actually runs in a containerized environment. However, if an application or function is needed all the time, or the number of such functions grows steadily, the budget can inflate seriously, and it becomes better to move everything into containers in the main infrastructure than to keep using serverless functions. A good compromise is Knative, a serverless solution that lets you deploy and run serverless applications and code on top of K8s. Our company's engineers can help implement Knative in your Kubernetes.
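The FaaS model described above can be sketched in a few lines of Python. This is a minimal illustration, not a real deployment: the `handler(event, context)` signature follows the common AWS Lambda convention, and the event fields used here are hypothetical.

```python
import json

def handler(event, context):
    """Hypothetical FaaS entry point: the platform invokes this function
    on demand, with no server for you to install or configure.

    The function runs a short session, computes a result, and returns;
    you are billed only for the execution time."""
    numbers = event.get("numbers", [])
    result = sum(numbers)
    # The platform serializes the return value back to the caller.
    return {"statusCode": 200, "body": json.dumps({"sum": result})}
```

Invoked with an event such as `{"numbers": [1, 2, 3]}`, the function returns the sum without any server being provisioned by the developer.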
Cloud-agnostic - not in a cloud, but among the clouds
Before choosing a cloud environment for a project, many companies perform a detailed analysis of the capabilities, features, pricing, and regional allocation of cloud providers' resources. However, over time, no matter which cloud you chose for the main part of the project, many tasks appear that have alternative solutions: simpler, cheaper, more optimal, or more convenient in another cloud environment or in a local infrastructure. Once you distribute a project across several clouds, the advantages of this approach become obvious:
- you do not depend on the marketing and pricing policy of a single provider, and you can always move part of the structure to another cloud or even on-premises;
- you have access to special features/functions that are available only from a specific provider;
- you can easily and quickly move parts of a project between clouds or on-premises infrastructure when required, without rewriting code or scripts.
This is the cloud agnostic principle in action.
As formulated by VMware:
"…In the strictest definition of the term, cloud-agnostic tools, services, and applications can be moved to/from any on-premises infrastructure, and to/from any public cloud platform, regardless of the underlying operating system or any other dependency…"
So tools such as Docker, K8s, Kubeflow, and MiniKF are cloud agnostic. Terraform, however, is better described as a multi-cloud support utility. It is often used to deploy infrastructure in any cloud, but each cloud has its own terms and configuration nuances, and when working in a multi-cloud environment you often use ready-made Terraform configurations for the well-known providers: AWS, GCP, Azure, etc.
Terraform – the most convenient IaC master
Terraform was created to manage infrastructure as code (IaC) in any cloud or multi-cloud environment. Using one easy-to-understand declarative language, HCL (HashiCorp Configuration Language), you can quickly create automated scripts for infrastructure deployments of any size and complexity, in any cloud or on-premises, and manage them everywhere with a single tool.
Using Terraform, you save a lot of time by automating repetitive manual actions that you would otherwise perform with, for example, Google Cloud Deployment Manager or AWS CloudFormation. Also, when several team members work with Terraform at once, it locks the resource state during deployment, warning others about an unfinished process. Thus each team member works with the current version of the infrastructure, and no one's changes are lost.
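To make the HCL approach concrete, here is a minimal, hypothetical Terraform configuration; the provider choice, region, and bucket name are illustrative examples only:

```hcl
# A minimal sketch: declare a provider and one resource in HCL.
# All names and values here are placeholders, not a prescription.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}
```

Running `terraform plan` previews the changes, and `terraform apply` executes them; with a remote backend that supports locking, the state is locked during apply so teammates cannot clobber each other's changes.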
Of course, the DevOps methodology is evolving, and new tools for managing infrastructure as code keep appearing. For example, Pulumi works not with a declarative language but with conventional general-purpose ones. And while Ansible is sometimes used together with Terraform to improve configuration management, Pulumi is said to retain all the advantages of Terraform while adding the capabilities of a programming language: when creating and modifying resources, it takes all configuration parameters into account and can perform iterative operations (for example, starting a group of resources).
However, despite these capabilities, Pulumi, which has recently gained recognition as a new cloud IaC orchestration tool, has not yet earned as much popularity and support as Terraform. Although it is too early to draw conclusions, its adoption is still quite slow, and Terraform, thanks to its convenience, widespread use in the cloud, and the mass of documentation and translations, will remain among the DevOps trends for at least a few more years.
Docker container - a format that has become a standard
Docker is the most popular containerization tool, and its container format is used by most cloud services. The appearance of Docker caused a real revolution in containerization, making it convenient and popular among software developers. Today, Docker containers run in almost any cloud or on-premises. Docker is minimalistic, yet it packages everything you need to run your application or code: databases, a web server, and all the necessary components and libraries. The easy portability of Docker images, which work locally and in any cloud, has ensured their widespread use.
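To illustrate that minimalism, a hypothetical Dockerfile might look like this; the base image and file names are placeholders for whatever your project actually uses:

```dockerfile
# A minimal sketch of a Dockerfile: the image carries everything the
# app needs, so it runs the same locally and in any cloud.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` produces a portable image that any Docker-compatible runtime can start.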
But the rapid growth in the number of containers eventually demanded an effective orchestration mechanism, because Docker containers quickly became the corporate standard for deploying applications. Docker therefore developed an orchestrator as minimalistic and simple as Docker itself: Docker Swarm. Unfortunately, despite its easy configuration and management, Docker Swarm did not gain the same popularity as Docker containers, and the market chose a more powerful and functional orchestrator from Google: Kubernetes!
Today, Docker's container runtime, containerd, is no longer part of Docker itself (Docker has become modular), and the Cloud Native Computing Foundation, which continues containerd's development and support, has declared it an open standard for any cloud platform and various OSes.
In addition, in support of the OCI (Open Container Initiative) container standards, but independently of that project, Red Hat created CRI-O and continues to promote it. CRI-O was originally intended for use with Kubernetes, described as "an implementation of the Kubernetes Container Runtime Interface (CRI)", and was hosted in the Kubernetes incubator.
Therefore, you can run Docker containers in Kubernetes using either containerd or alternative container runtimes such as CRI-O. And although the open source community offers other container management solutions (whose description is beyond the scope of this article), the use of Docker containers in the corporate environment remains trend #1 today, both in software development and in cloud container orchestration.
Kubernetes - do alternatives to Google's orchestrator have any chance?
The software development industry's gravitation toward microservice architecture has led to the emergence of a large number of container clusters. Managing their structure, automating container deployment and management, and monitoring and reacting quickly to events inside and between clusters in the cloud all require a reliable orchestration mechanism.
Today, Kubernetes has become the de facto most popular orchestrator of container clusters in any cloud or microservice architecture. Even if, for some reason, you use another orchestrator, e.g. Docker Swarm, Apache Mesos, or OpenShift, rapid growth in the number of container clusters and significant branching of your infrastructure across different cloud services will force you to bring management and automation to homogeneity and unification with a single powerful orchestrator. And you won't find a better multi-cloud orchestrator, because GKE (Google), AKS (Azure), and AWS EKS (Amazon) are different faces of the same thing - Kubernetes!
K8s automates all container operations. It deploys exactly as many containers as the application needs and monitors resource usage. If the need for resources grows, Kubernetes allocates them automatically, whether that is disk or address space, RAM, or additional worker nodes. When the need disappears and process activity decreases, K8s frees the idle resources, scaling the project's infrastructure to match its needs. This automates the deployment, scaling, and management of container infrastructure.
In addition, K8s automates the management of some functions specific to container clusters. For example, it provides:
- service discovery and load balancing - Kubernetes creates a single endpoint (e.g. a domain and port) for external requests to a group of application pods and distributes the load among them;
- storage orchestration - K8s can attach any local or cloud disk/storage to one or more containers;
- automated rollouts and rollbacks - Kubernetes can automatically roll out an update of container images or roll it back to the previous state;
- automatic bin packing - you can tell Kubernetes how many CPUs, how much RAM, and how many copies of new containers you want in your cluster, and it will automatically place the new containers in the best way, onto underloaded worker nodes, avoiding imbalance in the existing infrastructure;
- self-healing - K8s constantly monitors the health of running containers and automatically restarts them in case of failure;
- secret and configuration management - K8s stores passwords and other secret information in a secure place, outside containers and applications, protecting important information from unauthorized access and theft of confidential data.
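Several of the features above can be seen together in one hypothetical Deployment manifest; the names, image, and values are illustrative only:

```yaml
# Sketch of a Deployment showing replica management (self-healing),
# resource requests (used for bin packing), and a secret injected
# from outside the container. All names here are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                 # K8s restarts pods to keep 3 running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
          resources:
            requests:
              cpu: "250m"     # informs bin packing across nodes
              memory: "128Mi"
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef: # secret stored outside the container
                  name: example-secret
                  key: password
```

Applied with `kubectl apply -f deployment.yaml`, this keeps three replicas alive, schedules them onto nodes with spare capacity, and hands the password to the app without baking it into the image.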
Such powerful functionality, broad compatibility, and multi-cloud support have left Kubernetes practically without competition among cloud-agnostic orchestrators.
DevSecOps - security at all stages of development
Today, it has become harder to ignore security issues in the development and operation of software. Everyone uses mobile phones, computers, and gadgets; these devices run the most diverse software, and its sources are just as diverse. And if we talk about the software in medical equipment, the importance of its security is difficult to overestimate.
DevSecOps monitors compliance with information security standards.
The current international information security standard, ISO/IEC 27002, is built around three main principles that determine the state of software security:
Information security is a state of protection of data processing and storage systems in which the confidentiality, availability, and integrity of information are ensured.
(quote from Wikipedia)
How can DevSecOps ensure a proper level of information security?
First of all, by ensuring timely verification of the software at each stage of development.
IAST (interactive application security testing).
Allows you to detect architectural vulnerabilities. The SecOps specialist operates primarily within the DevOps methodology, which includes microservice architecture and container infrastructure, where confidential and secret information is stored separately from containers and access to it is strictly limited. Kubernetes, as mentioned earlier, works exactly this way.
A SecOps specialist also analyzes third-party libraries. They do not always have open source code, and trust in their quality depends on trust in their developers.
Static code analysis reveals vulnerabilities in the source code of applications: for example, a mistake in a function or operator may leave a vulnerability open at a certain stage.
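As a toy sketch of what static analysis does, assuming we treat any call to the built-in `eval()` as a vulnerability, Python's standard `ast` module can walk the parsed source tree and flag offending lines without ever running the code:

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Toy static analyzer: parse the source, walk its AST, and report
    the line numbers where the dangerous built-in eval() is called."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

code = "x = input()\nresult = eval(x)\n"
print(find_eval_calls(code))  # eval() called on line 2 -> [2]
```

Real static analyzers apply hundreds of such rules, but the principle is the same: the code is examined as data, before it ever executes.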
With the help of dynamic code analysis, DevSecOps detects vulnerabilities in the IaC infrastructure, because code is written by people, and people make mistakes. The drive to automate everything that can be automated therefore also raises the level of information security.
Analysis of mobile applications allows a DevSecOps specialist to identify the vulnerabilities specific to them.
In general, the task of DevSecOps is not to let the project slide into a constant struggle with the consequences of wrong decisions and mistakes, but to prevent the appearance of all kinds of vulnerabilities from the very beginning, at every stage of development.
SRE (Site Reliability Engineering) - a course on reliability and improving the quality of software
Site Reliability Engineering ensures the reliability and uninterrupted operation of a site under conditions of continuous development and high load on its services.
Although SRE is about working on errors, a Site Reliability engineer is not an ordinary technical support employee: the task is not so much to "put out fires" as to prevent them from occurring.
The SR engineer does not just troubleshoot, but encourages feature developers to monitor the performance of their applications in production. His task in the DevOps team is to analyze the frequency of errors, identify their causes, and proactively create automated ways to eliminate them in the future.
The Site Reliability engineer also monitors the stability of the infrastructure, which often overlaps with DevOps work. He can, of course, eliminate sudden problems as they arise, but a repeat of an incident or frequent service outages require a deep understanding of both the development process and the operation of scaling systems and load distribution, and of how a server or a cluster of servers copes with it.
After analyzing and identifying the reasons for the system's unreliable operation, the SRE can correct the unsuccessful solution himself or ask the responsible team members to do so, indicating how to eliminate the instability.
The Site Reliability engineer's ongoing work on a project includes:
- ensuring a quick response to the occurrence of unforeseen incidents;
- increasing the reliability and stability of all site services over time;
- automating and optimizing the operation of its services;
- increasing the stability of the site and its services under high loads.
AIOps and MLOps - the future has already come!
Making the right strategic decisions at the state, corporate, or narrowly specialized niche level often requires analytical information that allows a correct forecast of the future. For this, scientists develop mathematical models that collect and process statistical data; by comparing past and current data and accounting for variable conditions, it is often possible to obtain a fairly accurate analysis and forecast of business plans, finances, expenditure of various resources, technical product life-cycle indicators, and so on.
These are the tasks solved by AI/ML Ops tools and the teams that develop them.
To solve these tasks, the best cloud solution today is Kubeflow, which is tightly integrated with the Kubernetes platform. And for deploying initial mathematical models, the developer has MiniKF, which can be installed on a separate PC or laptop in a few minutes. By the way, we previously mentioned Serverbee's experience with Kubeflow and MiniKF deployment here.
Every year, machine learning systems find applications in new fields: automation in airplanes and automobiles, smart-home systems, bookmaking, and many other industries. The use of AI/ML Ops has not yet reached its peak of popularity. Mathematical models and machine learning increase the speed and quality of automation in production, research, and analytical and statistical work in IT, business, and banking. The development of mathematical models and their complexity is only gaining momentum, and demand for specialists in this field will continue to grow.