Opinions on Software Delivery & Engineering Metrics

Security and Availability make the headlines. Software Delivery makes the deadlines.

Work that improves security and availability trumps work that could otherwise improve the software delivery experience. Investing in Software Delivery means that an organisation can respond quickly to security events, build quality into software as it’s developed, and deploy and release small incremental changes that can be rolled back in seconds in the event of an outage. Invest in Software Delivery capabilities in order to gain security and reliability. Build towards Zero-cost Deployments.

Continuous Delivery must be the end state for all engineering systems. There is no reason a greenfield project cannot adopt Continuous Delivery on day one. Migrating established projects with existing automation towards Continuous Delivery can be difficult. Investment should go into migrating the system to Continuous Delivery, not into making the existing delivery setup easier to operate.

The set of metrics we use to measure software delivery, reliability, or security typically differs from the set of metrics teams need in order to improve in those areas. SLOs are the clearest example: an organisation creates a metric that shows whether systems are meeting their SLOs, but that metric doesn’t help engineers improve the reliability of those systems. This is fine, just be aware of it. Likewise, be wary of the inverse: when an organisation hoists a low-level engineering metric relevant to a specific platform onto every system, many systems end up with null metrics, e.g. services not running on that platform cannot report any values.

A good metric is one that we can talk about in natural language. The technical definition of Delivery Lead Time (DLT) is the time taken from when a commit was authored until it was successfully deployed to production. That’s difficult to parse. Instead, we can describe the median DLT as: the time it typically takes from when an engineer writes code until it’s used by customers. This phrasing makes the value of DLT clear and leads to compelling conversations.
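As a rough sketch of that definition, assuming we have (authored, deployed) timestamp pairs per change, the median DLT is just the middle of the per-change lead times. The data below is hypothetical:

```python
from datetime import datetime
from statistics import median

# Hypothetical (authored, deployed) timestamp pairs for recent changes.
changes = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 15, 30)),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 11, 0)),
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 16, 45)),
]

# DLT per change: commit authored until successfully deployed to production.
lead_times = [deployed - authored for authored, deployed in changes]

# The median is what we narrate as "the time it typically takes from
# writing code until customers use it".
print(median(lead_times))  # 6:30:00 for this hypothetical data
```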

The key high-level metrics we need are Delivery Lead Time, Deployment Frequency (DF), Time Since Last Deployment (TSLD), and Service Level Objectives (SLOs) for Reliability. Change Failure Rate and Recovery Time don’t tell us what we think we want to know. Read more in: DORA Metrics Reference.
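A minimal sketch of how DF and TSLD might be derived from a system’s deployment log, with hypothetical timestamps and a hypothetical seven-day window:

```python
from datetime import datetime, timezone

# Hypothetical production deployment timestamps for one system.
deployments = [
    datetime(2024, 5, 1, 15, 30, tzinfo=timezone.utc),
    datetime(2024, 5, 3, 11, 0, tzinfo=timezone.utc),
    datetime(2024, 5, 3, 16, 45, tzinfo=timezone.utc),
]

WINDOW_DAYS = 7  # assumed reporting window

# Deployment Frequency: deployments per day over the window.
df = len(deployments) / WINDOW_DAYS

# Time Since Last Deployment: now minus the most recent deployment.
tsld = datetime.now(timezone.utc) - max(deployments)

print(f"DF: {df:.2f} deploys/day, TSLD: {tsld}")
```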

Metrics must measure systems, not teams or individuals. Singling out individuals, even only for praise, sets the wrong incentives. Reporting metrics against teams misses the point. Metrics on systems illustrate the reality the median engineer faces when developing in them. Feedback and investment flow directly to the system, rather than by proxy as when metrics are presented against teams.

There is no objective “good” for metrics such as Delivery Lead Time and Deployment Frequency. Good-enough targets for delivery metrics depend on the type of system (API, frontend, mobile, database, …), its age, the frequency of activity in the codebase, the business value it brings to the organisation, and the expected lifespan of the code.

System owners should set their own targets for DLT and DF. As with SLOs, there are no magical numbers to aim for. Set aside the impractical numbers (e.g. 100% availability, a 5-minute DLT), and consider what is good enough for the system.
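For illustration only, such targets could be recorded per system. The system names and numbers below are made up, not recommendations:

```python
# Hypothetical per-system targets; there are no magical numbers, so each
# system owner picks what is good enough for their system.
targets = {
    "payments-api":  {"median_dlt_hours": 4,  "deploys_per_week": 10},
    "mobile-app":    {"median_dlt_hours": 72, "deploys_per_week": 1},
    "internal-tool": {"median_dlt_hours": 24, "deploys_per_week": 3},
}
```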

Software Delivery metrics can act as Service Level Indicators for engineering systems. DLT is a wide measure across many discrete activities such as code review, build time, testing time, and deployment time. It also captures time introduced into the delivery process by team or organisational overhead. Seeing DLT and DF drift from the baseline indicates that the delivery process is worth investigating.
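One possible shape for such an indicator, assuming an agreed baseline median DLT and a hypothetical 25% tolerance:

```python
from statistics import median

def dlt_drifted(recent_dlt_hours: list[float],
                baseline_hours: float,
                tolerance: float = 0.25) -> bool:
    # True when the rolling median DLT sits more than `tolerance`
    # (as a fraction) above the agreed baseline: a cue to investigate
    # the delivery process, not a verdict on anyone.
    return median(recent_dlt_hours) > baseline_hours * (1 + tolerance)

# With a 6-hour baseline, recent lead times of 7.5-10 hours flag drift.
print(dlt_drifted([7.5, 9.0, 8.2, 10.1], baseline_hours=6.0))  # True
```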


