The DevOps Lifecycle, Tools and Toolchain

The DevOps Lifecycle

The concept of joining development and operations people together supports the goals of DevOps:

Improve collaboration among all stakeholders from planning through delivery, and automate the delivery process, in order to:

  • Improve deployment frequency
  • Achieve faster time to market
  • Lower failure rate of new releases
  • Shorten lead time between fixes
  • Improve mean time to recovery

This is accomplished by implementing and adhering to a straightforward lifecycle of activities:

  • Continuous Development
  • Continuous Integration
  • Continuous Testing
  • Continuous Monitoring
  • Virtualization and Containerization

Clearly, DevOps fosters a culture of continuous everything! The result is an iterative cycle of constant improvement, with far shorter update cycles that accelerate time to market while making it far easier to identify and correct deficiencies should they occur. That iterative cycle repeats constantly:

  • Be Agile
  • Plan
  • Build
  • Test
  • Release
  • Configure
  • Monitor

From the monitoring, we learn what we need to know to return to being agile and begin the next plan. Jez Humble, one of the pioneers of the DevOps movement and co-author of The DevOps Handbook, points out that “DevOps is not a goal, but a never-ending process of continual improvement.”

A Lifecycle of Continual Improvement

Most of us are accustomed to periodic updates and upgrades. New features, new capabilities, and new security provisions are introduced that make our software experience better. Some of these are ad hoc, distributed when a need, such as a security weakness, is identified and must be addressed immediately.

Most updates are distributed weekly or monthly. Upgrades arrive perhaps a bit less frequently, and major new versions may take years.

This is true for software applications that have been developed in the classic, monolithic way. Any change must be planned, designed, coded, prototyped, piloted, tested, and re-tested before distribution.

In cloud environments, monolithic application development no longer makes sense and has been superseded by microservices delivered in containers along with all libraries and other resources required to run them. These microservices are called upon as required. Should one become damaged in the course of transit or processing, it is instantly re-instantiated and re-distributed. This delivers a level of resilience unavailable in monolithic development.

Similarly, today’s DevOps Lifecycle re-emphasizes the concept of “cycles.” Here, the cycles come far faster and more frequently. In DevOps, the developers and operators who have joined forces develop a culture in which they are constantly finding new, better, faster ways to work. Leveraging their foundation in Agile development methodologies, they build new code in collaborative environments rather than silos, and they focus on “short yardage,” creating repetitive rounds of incremental improvement.

As soon as the developers have distributed new code, the operators step in and start gathering user feedback. This vital information is fed back immediately to the developers, who assess the input and create code for new solutions. Since each solution is a small component of a much larger environment, it can be tested, packaged into its container, released, configured, and distributed, at which point the operators step in again and begin gathering the next round of user feedback, starting the next cycle. This repetition continues, and continues, in a lifecycle of constant, continual improvement.

Iterative Lifecycle Process

Even though they are now working together collaboratively, it is useful to examine the steps both “Dev” and “Ops” execute in the course of the DevOps Lifecycle. Let’s enter right in the middle.

Ops Task Cycle

Monitor-Audit-Diagnose-Tune-Feedback

A major responsibility that falls to Ops is the constant monitoring of applications to assure optimum performance. This cannot end at simply monitoring. Going deeper, Ops must regularly audit the system surrounding each application to assure that everyone is doing what they are supposed to be doing in a timely and efficient manner. As they expose weaknesses, they must diagnose the underlying root causes. If possible, they will tune any configuration parameters that are available to them in order to resolve the anomaly. If they are not able to do so, the report of this anomaly becomes part of their regular feedback to Dev.
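To make the monitoring step concrete, here is a minimal sketch in Python, assuming a hypothetical HTTP health endpoint, an illustrative latency budget, and a simple feedback file; none of these names come from a specific tool. It polls the application, and any anomaly it detects becomes a feedback record for Dev. A real shop would use a dedicated monitoring platform, but the loop is the same.

```python
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint
LATENCY_BUDGET = 0.5                         # illustrative threshold, seconds

def check_once() -> dict:
    """Poll the health endpoint once and measure response latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            healthy = resp.status == 200
    except OSError:
        healthy = False
    return {"healthy": healthy, "latency": time.monotonic() - start,
            "ts": time.time()}

def monitor(feedback_path: str = "feedback.jsonl") -> None:
    """Monitor continuously; anomalies become feedback records for Dev."""
    while True:
        result = check_once()
        if not result["healthy"] or result["latency"] > LATENCY_BUDGET:
            # In practice this would go to a ticketing or chat tool.
            with open(feedback_path, "a") as f:
                f.write(json.dumps(result) + "\n")
        time.sleep(30)  # polling interval

if __name__ == "__main__":
    monitor()
```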

Dev Task Cycle

Analyze-Edit-Build-Test-Debug-Deploy

As new feedback comes in, Dev analyzes it to determine root causes and identify opportunities for improvement. They may then edit existing code or build new microservices to resolve reported issues. Once corrected or conceived, the changes and additions must be tested and then debugged as necessary. Finally, they deploy the new microservices, at which point Ops begins monitoring them.

And the next iteration of the lifecycle continues. Again, and again, and again. Continuous development. Continuous integration. Continuous improvement.

DevOps Tools

Think back to the introduction of any major new technology and you’ll recall an explosion of tools and utilities surrounding that product or process. Eventually the strong survive, absorbing some of the middle tier, and everyone else moves on.

DevOps is no exception. To span both development and operations, several tools will obviously be required. The challenge, as always, is to choose the right ones to invest in the first time so you can preserve your investments as you grow. It is as difficult a challenge with DevOps as it has ever been with any emerging methodology.

Tool Categories

An excellent way to approach the selection challenge is to break the big problem down into smaller component pieces; in this case, breaking the many tools now on the market into categories creates smaller groups of choices to consider.

Many of these categories correspond closely to the stages of the lifecycle discussed earlier. Others are environmental tools meant to support a more conducive way to weave development and operations together:

Collaboration

The most important connections are those made between members of the team. Communication between team members must be as rapid and effortless as possible. Many DevOps collaboration tools make starting as simple as a chat. When the situation warrants, the chat can easily escalate to voice, video, screen sharing, calendaring, whiteboarding, and many other convenient features. Perhaps the most valuable is the ability for several team members to edit documents simultaneously.

Code review

Just as no writer can properly review and proofread their own prose, developers are ill served by reviewing their own code. People tend to see what they expect to see, which makes it very likely that a developer will pass over the obvious logic errors they wrote. Review by a colleague avoids this problem. Reviewers will also determine whether all of the functional specifications and requirements have been fulfilled and whether the code conforms to the guidelines established for the project. Automation tools that perform some of the standard checks accelerate the process, as sketched below. Code review tools also provide important version control.
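The automated portion of a review can be as simple as a script that runs the project’s linters before a human ever looks at the change. The sketch below assumes the flake8 linter is installed; substitute whatever checkers your project has standardized on.

```python
import subprocess
import sys

def pre_review_checks(paths: list[str]) -> int:
    """Run automated checks before human review begins.

    Assumes flake8 is installed; it exits non-zero when it finds
    style or simple logic problems, which we use as a gate.
    """
    result = subprocess.run(["flake8", *paths])
    if result.returncode != 0:
        print("Lint failed: fix the reported issues before requesting review.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(pre_review_checks(sys.argv[1:] or ["."]))
```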

Continuous integration and continuous deployment (CI/CD)

Welcome to the core purpose of DevOps: continuous improvement through continuous development, continuous integration, and continuous deployment, leading to continuous feedback that starts the entire cycle again. By definition this is an iterative process that keeps adding value with each cycle. The flip side is that every cycle also adds complexity. CI/CD tools exist in many flavors, each helping to manage that growing complexity. Some automate software testing and deployment. Others focus on integration and delivery. Some feature plug-ins that will automate just about any task in a DevOps environment, or at least claim to.
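Conceptually, a CI/CD pipeline is an ordered list of stages that fails fast. The sketch below expresses that idea in Python; real tools declare pipelines in configuration files (a Jenkinsfile, for example), and the stage commands here are placeholders.

```python
import subprocess
import sys

# Illustrative pipeline definition: each stage is a shell command.
PIPELINE = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "unittest", "discover", "-s", "tests"]),
    ("package", ["python", "-m", "zipfile", "-c", "app.zip", "src"]),
]

def run_pipeline() -> int:
    """Run stages in order; any failure stops the pipeline (fail fast)."""
    for name, cmd in PIPELINE:
        print(f"--- stage: {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"Stage '{name}' failed; aborting pipeline.")
            return 1
    print("Pipeline succeeded; ready for deployment.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```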

Build automation

Development is a team effort with many contributors. Once a developer has committed source code, it must be retrieved from the code repository and compiled into executable form by a build script. The result is integrated into the shared environment, where it will interact with code produced by other developers. Before that happens, it is best practice to ensure that the new binary will not negatively impact existing code. Build automation tools perform this testing. They also support linking of modules and processes, documentation, testing, compilation, packaging, compression, and distribution of binaries.
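At its core, build automation avoids redundant work by comparing sources against their outputs and rebuilding only what changed. Here is a minimal Python sketch of that idea using the standard library’s py_compile; the src and build directory names are illustrative assumptions.

```python
from pathlib import Path
import py_compile

SRC = Path("src")    # illustrative source tree
OUT = Path("build")  # illustrative output tree

def build_incrementally() -> None:
    """Recompile only sources newer than their compiled output,
    the core trick behind build tools such as Make or Gradle."""
    OUT.mkdir(exist_ok=True)
    for source in SRC.rglob("*.py"):
        target = OUT / source.relative_to(SRC).with_suffix(".pyc")
        if not target.exists() or source.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            py_compile.compile(str(source), cfile=str(target), doraise=True)
            print(f"compiled {source} -> {target}")

if __name__ == "__main__":
    build_incrementally()
```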

Testing automation

The core of DevOps is automation, which accelerates the process of developing software, testing it, deploying it, and returning feedback quickly to move to the next iteration. The concomitant commitment must be to deliver “quality at speed.” Faster isn’t better if the quality is absent. Testing is often seen as a prime source of latency in the process, so test automation tools have been developed to automate as much of the code testing process as possible to reduce that latency.
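A test automation suite is ultimately a set of small, fast checks run on every change. The sketch below shows the shape using Python’s built-in unittest module; the function under test is purely illustrative.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Example function under test (illustrative)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # a CI job would run this on every commit
```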

Release automation

Also referred to as Application Release Automation (ARA), release automation describes the process of packaging and deploying an application or updates to an application. It spans the workflow from development through Quality Assurance (QA), integration testing, and user acceptance testing (UAT), all the way to production. This very effectively supports the DevOps goals of more frequent releases at higher quality.
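One core release-automation principle is that the artifact that passed QA should be byte-for-byte the artifact that reaches production. The following Python sketch illustrates promotion through a hypothetical environment chain with a checksum guard; the directory and artifact names are assumptions.

```python
import hashlib
import shutil
from pathlib import Path

ENVIRONMENTS = ["qa", "uat", "production"]  # illustrative promotion path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def promote(artifact: Path) -> None:
    """Copy the same tested artifact through each environment,
    verifying it is unchanged at every hop."""
    expected = checksum(artifact)
    for env in ENVIRONMENTS:
        dest = Path(env) / artifact.name
        dest.parent.mkdir(exist_ok=True)
        shutil.copy2(artifact, dest)
        assert checksum(dest) == expected, f"artifact corrupted en route to {env}"
        print(f"promoted {artifact.name} to {env}")

if __name__ == "__main__":
    promote(Path("app.zip"))  # hypothetical build output
```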

Configuration management

Consistency is a key characteristic of a quality DevOps environment. Servers, storage, networking, code, operating systems: literally everything at every step along the path must be kept as consistent as possible to optimize the speed of all workflows. At the same time, consistency enables scalability. When every configuration is identical, it becomes simple to replicate them rapidly. Any manual intervention immediately introduces latency. Configuration management tools make changes and deployments faster, more predictable, scalable, and replicable, keeping all controlled assets in their desired end state.
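The defining behavior of configuration management tools is idempotent convergence: declare the desired end state once, and applying it repeatedly changes nothing once the system already matches. A minimal sketch, assuming a simple JSON config file and illustrative settings:

```python
import json
from pathlib import Path

# Desired end state, declared once (illustrative settings).
DESIRED = {"max_connections": 200, "log_level": "info"}

def converge(config_path: Path) -> bool:
    """Bring the config file to the desired state; do nothing if it
    already matches. Idempotence is what makes configuration
    management safe to run repeatedly across many servers."""
    current = {}
    if config_path.exists():
        current = json.loads(config_path.read_text())
    if current == DESIRED:
        print("already converged; no changes made")
        return False
    config_path.write_text(json.dumps(DESIRED, indent=2))
    print(f"converged {config_path} to desired state")
    return True

if __name__ == "__main__":
    converge(Path("app-config.json"))
```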

Infrastructure and application monitoring and management

By uniting development and operations, we acknowledge that it’s not enough to iteratively develop and deploy software faster. We must also assure that the software runs efficiently and continuously. Application Performance Monitoring/Management (APM) tools constantly watch all of the resources required for applications to run optimally, alerting operators or, preferably, taking automated corrective action themselves. The underlying network on which everything runs must be monitored and managed in the same way. Reports generated from both systems enable system owners to identify proactive steps that will improve operations.
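A much-simplified version of what an APM agent does: keep a sliding window of response times and fire an alert, or a remediation hook, when the average drifts past a budget. The thresholds and simulated samples below are illustrative assumptions.

```python
from collections import deque
from statistics import mean

class LatencyMonitor:
    """Minimal APM-style check over a sliding window of samples."""

    def __init__(self, budget_ms: float = 250.0, window: int = 20):
        self.budget_ms = budget_ms
        self.samples: deque = deque(maxlen=window)

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)
        if len(self.samples) == self.samples.maxlen and mean(self.samples) > self.budget_ms:
            self.remediate()

    def remediate(self) -> None:
        # A real APM tool might restart or scale a service here;
        # this sketch just raises the alert.
        print(f"ALERT: avg latency {mean(self.samples):.0f} ms exceeds "
              f"{self.budget_ms:.0f} ms budget")

monitor = LatencyMonitor()
for sample in [200, 240, 280, 320, 360] * 4:  # simulated response times
    monitor.record(sample)
```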

Containerization

Containers make DevOps development and delivery easier because they assure that the operating environment the code runs in doesn’t change between developers, between environments, or over time. Containers accomplish this by packaging everything the application needs, the code and all its dependencies, including runtime, system tools, system libraries, and settings, inside the container, placing ownership of the entire package that actually runs the software with the developers. This makes it far easier for developers to share the complete package with IT operations, the defining goal of DevOps.
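In practice, developers hand operations an image rather than raw code. The sketch below drives the real docker CLI from Python; the image name is hypothetical, and it assumes a Dockerfile exists in the current directory.

```python
import subprocess

IMAGE = "myteam/orders-service:1.0"  # hypothetical image name

def build_and_run() -> None:
    """Build an image from the local Dockerfile, then run it.

    Because the image bundles the code and all of its dependencies,
    it behaves identically on a laptop and in production.
    """
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "run", "--rm", "-p", "8080:8080", IMAGE], check=True)

if __name__ == "__main__":
    build_and_run()
```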

Serverless Computing 

Prior to the introduction of “serverless” architecture, developers needed operations to provision and maintain servers to provide the runtime services they needed to build and test code. Serverless simply moves that to the cloud. Literally every task performed in a DevOps environment is accelerated by the constant availability of consistently configured resources, from development to testing to deployment. This dramatically improves the role of operations by removing what is probably their most time-consuming, commoditized set of tasks, freeing them to engage in far more valuable activities supporting development and the user community.
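In a serverless model, the unit of deployment shrinks to a single function; the platform provisions, scales, and retires the runtime. Here is a minimal sketch using the AWS Lambda handler convention as one example; the event shape is an illustrative assumption.

```python
import json

def handler(event, context):
    """Lambda-style entry point: the platform invokes this function;
    the developer ships only this code, no server to manage.
    The event shape below is an illustrative assumption."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test; in production the cloud platform calls handler().
if __name__ == "__main__":
    print(handler({"name": "DevOps"}, None))
```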

As we said at the outset, with the DevOps movement exploding, and with so many different disciplines being incorporated into it, the list of available tools is enormous and growing. Here is a sampling of tools found on various “Top 10”, “Top 30”, and similar lists:

Ansible, Artifactory, Bamboo, Behat, Capistrano, CFEngine, Chef, Code Climate, Consul, Docker, ELK Stack, Ganglia, Git, Gradle, Graylog, Hudson, Icinga, Jenkins, Juju, Kubernetes, Loom Systems, Monit, Nagios, New Relic APM, OverOps, PagerDuty, Plutora, ProductionMap, Prometheus, Puppet, Raygun, Rudder, and SaltStack


DevOps Toolchain: Innovate Faster and Deliver Superior Software

We’ve introduced and described categories of tools rather than highlighting specific tools, partly because there are so many in each category that we could never cover them all.

This should raise the question in your mind, “How are we going to select from them all?”, and there are plenty of analysts attempting to answer it. If each of these categories held only ten choices, that would mean evaluating 100 products, and there are far more choices in the market, with new ones entering every day.

This is a compelling reason to establish a complete, end-to-end toolchain with a selection from each of these categories. Ideally, completing work in each tool leads directly into the next tool in the chain. DevOps is obviously not this linear, but forward progress is always a good model on which to base strategy. Of the many advantages to be gained from a unified toolchain, the simplest and most fundamental is serving the competitive need to innovate faster, delivering superior software faster and more frequently.

Improving exception handling and incident management is also key to maintaining high velocity in software delivery cycles. Ultimately, this will also serve to help you identify and resolve defects.

A DevOps practice doesn’t necessarily need tools, and is certainly not defined by them, but given that the goal is accelerated release of constantly improving software, anything that speeds up the processes inherent in DevOps is, by definition, a good thing.

Avoiding Latency

In DevOps, latency is our enemy. The idea is to accelerate the cycles, so anything that slows them down hinders us.

Using tools that don’t work well together introduces latency, usually in the form of workarounds devised by developers and operators who all have better things to do with their time than devise kludges.

This adds a whole new level of complexity to our tool selection problem. Not only do we need to evaluate each tool against the other choices in its category, we also need to gauge its interoperability with the tools in other categories.

Consistent with DevOps philosophy, we also need to determine how acceptable each selection will be to all members of the development and operations teams, since the tools’ interoperability will help determine the teams’ interoperability and seamlessness.

So our challenge is to identify best-of-breed tools in each category, prove that they all work well with each other, and confirm that they will be fully acceptable to all involved parties. This will not be simple.

Alternative Strategies

One strategy that almost every organization explores for just about every initiative is mandated adoption. This almost never works well. Departments are not only invested financially in existing tool platforms, they are also pragmatically invested in training, policy and procedure development, and more. The one thing that results from most mandates is resistance.

A truly extraordinary team could work together to identify tools that will work well for all of them and with the other tools they choose, but this will likely consume a significant amount of time and resources.

The opposite extreme, letting every team choose its tools with no coordination at all, is anarchy, and anarchy is a costly alternative.

Integrated Platforms

Most of us are accustomed to platform providers offering an integrated suite of related services. It is certainly the rule in Enterprise Resource Planning (ERP) software and productivity suites. The disadvantage may be that some of the modules aren’t best-in-class: you might have a great word processor in your productivity suite, but a spreadsheet that doesn’t beat the competition. The ERP order processing may be extraordinary, while the inventory management leaves much to be desired.

The big benefit of an integrated platform is its high level of interoperability. It’s easy to copy a spreadsheet and paste it into a document in the word processor. Order Entry and Inventory both inform General Ledger flawlessly.

Unfortunately, a completely integrated comprehensive platform that will serve as a fully interoperable DevOps toolchain has not yet been introduced. Some vendors may be considered “close” to it, but more time and development are needed as the definition of DevOps continues to evolve.

There Is Another…

There is another hope to look to.

These challenges are, in part, due to the short amount of time elapsed since DevOps emerged about ten years ago. There simply hasn’t been enough time for fully integrated toolchains to emerge.

On the other hand, most of these tools are being developed as open source software. Given that the contributors are all developers, the open source community may be the group most predisposed, and most likely, to converge, collaborate, and create an identifiable integrated toolchain for end-to-end DevOps management.

Watch for it!

Planning a DevOps Initiative? Let Us Help!

Struggling to achieve alignment and collaboration within your organization? Not sure how to begin? We can help. Tiempo has assisted many organizations with adopting DevOps and completing successful initiatives with minimal disruption. From strategy and planning to implementation, Tiempo has the resources you need to make your initiative a success.

Schedule a DevOps Consultation