Our understanding of DevOps is to bring together all aspects of software development (Dev) and operations (Ops) into one coherent, unified system. This entails not only technical aspects but also, and foremost, organizational and procedural topics. It includes aligning the goals of Dev and Ops rather than defining them independently, in order to avoid conflicting targets such as "maximum number of new features developed" (Dev) vs. "maximum stability of the system" (Ops). Typically, Dev and Ops form joint DevOps teams which cover the entire project cycle together; long and painful feedback loops around handover phases can thus be avoided. In addition, DevOps teams aim to automate as many operational and delivery-related processes as possible and store them in a version control system, just like the source code itself (infrastructure-as-code). This allows the teams to make these processes reproducible and idempotent and creates transparency about the software versions and configurations in use. As an additional result, all changes to the system are tracked and can be reviewed.
Ideally, larger organizations split into autonomous teams which mirror the architecture of the software. The architecture should be modular and allow for loose coupling of the individual modules so that teams can work independently. Each module or service (in the case of a microservice architecture, see below) is developed and operated by exactly one team.
After touching on several different and, at first sight, independent topics, we now want to dive into the specifics and connect the dots.
Agile software development has become the standard development methodology in many companies across all industries and sizes, replacing the traditional "waterfall" methods. As a result, however, new requirements arose for the management and organization of software delivery, which need to keep up with the increased pace of releases. While in traditional development a new version may have been delivered two to four times a year, we may now be looking at bi-weekly releases (as in many forms of the popular Scrum framework) or even continuous delivery (meaning that every single change to the system can be rolled out individually). The resulting increase in the effort and risk of (manual) software delivery needs to be mitigated. This marked the beginning of the DevOps movement and its goal to make software deliveries as smooth and risk-free as possible.
In DevOps, all project members in software development and operations are joined into one organization. This group takes care of the entire life cycle of the application. Because they take over all aspects of development, maintenance, operations, scaling and stability of the software, all members are required to consider the later stages of the life cycle (operations and maintenance) already during the earlier stages (planning, design, technical decisions). Members cannot simply leave these aspects to somebody else. Consequently, there is a higher level of identification with the project and its goals as well as a higher quality standard toward one's own work. Goals are not formulated per team or team member, but for the entire organization working on the same project or product. This drastically reduces the risk of conflicting interests.
The reorganization into DevOps teams is often the most difficult and painful phase. The traditional disciplinary structures and procedures need to be replaced while keeping the running applications stable and proceeding with their enhancement. It is therefore indispensable that both the management and the project members fully align on the new structure and agree to these sometimes drastic changes. To be entirely honest, such a transition does not only produce winners in the organization. It is, however, eventually necessary in order to achieve true agility throughout the software development and operations organizations.
Automating as many of the recurring processes as possible is a key element in achieving the desired agility. On the one hand, the sheer recurring effort needs to be reduced to a minimum. On the other, the risk of making mistakes during the execution of the respective processes is minimized. Following the mantra "if it hurts, do it more often", complex and error-prone processes get automated step by step so they can be applied to more applications or services. This turns risky delivery processes into routine, and errors can be detected, rolled back and eliminated earlier.
Continuous Delivery describes a degree of automation in software delivery that allows new versions of the software to be deployed to the production systems without significant manual interaction. This is achieved by using entirely reproducible processes which are expressed as code (e.g. scripts) and can be executed on the target environments as many times as needed with identical results. The configuration of all components and environments is kept in a central place and is also changed only there, never on the target machines. This allows for full control over the resulting state of the target systems.
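The core property of such processes is idempotency: running a delivery step twice must leave the system in the same state as running it once. A minimal sketch in Python, with a made-up configuration file standing in for a real deployment target:

```python
import tempfile
from pathlib import Path

def apply_config(target: Path, desired: str) -> bool:
    """Idempotently bring the target file to the desired content.

    Returns True if a change was made, False if the system already
    matched the desired state -- so re-running is always safe.
    """
    if target.exists() and target.read_text() == desired:
        return False  # already converged, nothing to do
    target.write_text(desired)
    return True

# Running the same step twice ends in the same state either way:
target = Path(tempfile.mkdtemp()) / "app.conf"
changed_first = apply_config(target, "port = 8080\n")
changed_second = apply_config(target, "port = 8080\n")
```

The same check-before-change pattern is what real delivery tooling applies at every level, from single files to whole environments.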
A large share of this automation concerns software testing. While unit testing was long the method of choice, nowadays every change to the software is also integration-tested, i.e. verified to work functionally and without errors together with its dependencies. These dependencies can be remote systems or other components or services, either as the real systems or as mock services.
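To illustrate the mock-service idea, here is a small sketch using Python's standard `unittest.mock`. The checkout function and pricing service are invented for illustration; the point is that a remote dependency can be replaced by a stand-in with the same interface:

```python
from unittest.mock import Mock

# Hypothetical checkout logic whose price lookup would normally call
# a remote pricing service.
def total_in_cents(cart, price_service):
    return sum(price_service.price_cents(item) for item in cart)

# In a test, the real service is swapped for a mock that answers
# with fixed prices:
pricing = Mock()
pricing.price_cents.side_effect = {"apple": 50, "pear": 70}.__getitem__
result = total_in_cents(["apple", "pear"], pricing)
```

The same test can later run against the real pricing service without changing the code under test, since both honor the same interface.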
It is not only the testing and deployment of the software itself that is automated within the DevOps framework. Ideally, this also applies to the setup and maintenance of the underlying infrastructure. This is especially important when hosting the software in the cloud, but it can also be achieved for on-premise installations in your own data center. In the latter case, the respective methods and tools for provisioning servers and networks (infrastructure automation) or higher-level layers like databases, virtual machines or containers need to be applied. The resulting instructions and configuration are also stored and managed centrally and treated in the same way as the code of the software itself (infrastructure-as-code).
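The essence of infrastructure-as-code is that the desired infrastructure is declared as data and diffed against the current state before anything is changed. A toy "plan" step in that spirit (resource names and attributes are made up for illustration, not taken from any specific tool):

```python
def plan(current: dict, desired: dict) -> dict:
    """Compare the declared desired state against the current state
    and report what would have to change -- the conceptual first
    half of an infrastructure-as-code apply cycle."""
    return {
        "create": sorted(set(desired) - set(current)),
        "delete": sorted(set(current) - set(desired)),
        "update": sorted(name for name in desired
                         if name in current and desired[name] != current[name]),
    }

current = {"web-1": {"size": "small"}}
desired = {"web-1": {"size": "large"}, "db-1": {"size": "medium"}}
changes = plan(current, desired)
```

Because the desired state lives in version control, every infrastructure change is reviewable and reproducible, just like an application code change.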
Microservices have established themselves as the go-to architectural pattern for complex software systems over the past years, although there is no fixed definition of the term "microservice". In essence, microservices are autonomously deployed and operated services which interact with each other through well-defined interfaces but do not share memory or a process, and are therefore independent. The advantage is a high degree of flexibility regarding scalability (each service can be scaled independently) and technology (each service can be built on its own technology stack). The drawbacks lie mostly in the management and orchestration of many individual services, the overhead of operating the infrastructure, and the requirement that all communication between services happens over the network rather than locally. Matching the team structure described above, each service is developed, maintained and run by a single team. As a consequence, teams only need to agree on the interfaces and can otherwise work independently.
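A sketch of what "agreeing only on the interface" means in practice: two teams fix a message contract, while everything behind it stays private to the owning service. The service, field names and prices below are invented for illustration:

```python
import json

# Internal detail of the hypothetical order service; free to change
# at any time as long as the JSON contract below stays stable.
PRICES_CENTS = {"widget": 250}

def handle_order_request(raw: str) -> str:
    """Agreed interface between the two teams:
    request  {"item": str, "qty": int}
    response {"item": str, "total_cents": int}"""
    request = json.loads(raw)
    total = PRICES_CENTS.get(request["item"], 0) * request["qty"]
    return json.dumps({"item": request["item"], "total_cents": total})

response = json.loads(
    handle_order_request(json.dumps({"item": "widget", "qty": 3}))
)
```

In a real system the same contract would be carried over HTTP or a message queue; the stability of the interface, not the transport, is what lets the teams work independently.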
The rapid adoption of DevOps techniques and methodologies has led to a vast number of technologies and tools, most of which are open source and of a surprisingly high level of maturity. While there are multiple alternatives for each use case, the following set of tools has proven to be stable and reusable across a number of projects. Depending on your specific project, we are also happy to discuss these alternatives.
We work with you on the foundation of DevOps. This part is entirely technology-independent and emphasizes the fundamentals of DevOps. Together we develop the strategy for introducing or deepening your DevOps structures. We discuss and implement best practices from the industry and past projects and work out the ideal target model as well as your roadmap to get there.
During the transition we support you with our expertise regarding technology choices and the first steps with the new tools to achieve quick wins. These foster motivation and reduce ongoing effort. Further on, we supervise the setup of the new structures and the adoption of process automation. In addition, we help out hands-on when things get tight. Toward the end, we organize the ideal knowledge transfer through training on the job.
Is your architecture too rigid to support real modularity? We support you in defining and following your path toward microservices, and help you with architectural design, the implementation of individual services, or the automation of recurring tasks.
Are you looking for a reliable partner to support you with your IT projects?
Just get in touch and let us talk about your project.