How to Calculate DORA Metrics

DORA metrics, developed by Google Cloud’s DevOps Research and Assessment team, are a proven and effective way to measure and improve DevOps delivery performance. By tracking and optimizing these metrics, development and DevOps teams can identify bottlenecks, enhance processes, and ultimately deliver higher-quality software more quickly and reliably.

Although these metrics are simple, they’ve become an industry standard because they provide actionable insight into software delivery performance. The four DORA metrics are as follows:

  • Lead time for changes
  • Deployment frequency
  • Failed deployment recovery time
  • Change failure rate

DORA metrics also have the benefit of not singling out individual DevOps team members. Software delivery issues are usually caused by processes, not people. DORA metrics are most useful at identifying process bottlenecks, which, if improved, enable people to do their best work. While DORA metrics alone don’t guarantee a good experience for team members, they are a strong indicator of thoughtful management focused on creating a healthy DevOps process that gets work into production quickly.

In this guide, you’ll learn what each metric is, why it matters, and how to calculate it manually using GitHub Actions in your GitHub repository without any external tools.

Calculating Your DORA Metrics

If you want to follow along with this guide, you’ll need a GitHub repository to add your DORA actions to. The actions will work best in an active repository with frequent commits and deployments to provide data for calculating metrics. However, you can also add the actions to an empty repository and then add an empty deployment so you can experiment with the DORA actions.

Start by cloning the repository you’ll use, and then create a new directory named .github/workflows in the root of the repository. As you create each action below, place it in a YAML file with a meaningful name, such as calculate-lead-time.yml, in the directory you just created. The exact file names don’t matter, as GitHub automatically processes all YAML files in a repository’s .github/workflows directory. For more information on how GitHub Actions work and how to set them up, refer to the GitHub Actions docs.
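For example, from the root of a fresh clone (the repository URL here is a placeholder):

git clone https://github.com/OWNER/REPO.git
cd REPO
mkdir -p .github/workflows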

You will store the data for DORA metric calculation in CSV files saved to the repository, which avoids the need for an external data store. While there are many automated tools for calculating DORA metrics, learning how to calculate the metrics manually ensures you will fully understand your data if you adopt an automated solution.

In addition to the actions that store the raw data, you’ll create a final action that calculates daily DORA metrics for each of the past thirty days and publishes them in a Markdown-formatted report you can view via the GitHub UI for your repository.

Let’s start by creating an action that calculates lead time.

Lead Time for Changes

Lead time for changes measures the time it takes for a Git commit to get into production. This metric helps you understand how quickly you deliver new features or fixes to your users.

To calculate it, you need timestamps for when commits are initially added to the system and when those commits are pushed into production. Here’s how you can use a GitHub action to calculate the lead time for changes:

name: Calculate Lead Time for Changes
on: deployment_status
permissions:
  contents: write
jobs:
  lead-time:
    # The deployment_status event has no type filter, so check the state here
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          # Fetch the full history so git log can see every commit
          fetch-depth: 0
      - name: Calculate lead time
        run: |
          DEPLOYMENT_SHA=${{ github.event.deployment.sha }}
          DEPLOYMENT_DATE=$(date -d "${{ github.event.deployment_status.created_at }}" +%s)
          # One line per commit reachable from the deployed SHA: hash,commit timestamp
          git log --pretty=format:'%H,%ct' $DEPLOYMENT_SHA > commit_times.csv
          # Append the deployment timestamp and each commit's lead time in seconds
          awk -F',' -v deploy_date=$DEPLOYMENT_DATE '{print deploy_date "," deploy_date - $2}' commit_times.csv >> lead_time_results.csv
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add lead_time_results.csv
          git commit -m "Update lead time results"
          git push

The git log command retrieves the commit hashes and timestamps, which are then processed using awk to calculate the lead time by subtracting the commit timestamp from the deployment timestamp.
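With the workflow above, each line appended to lead_time_results.csv holds the deployment timestamp followed by a lead time in seconds; for instance, a hypothetical line of 1718064000,86400 records a commit that took 24 hours to reach production. Note that this simple approach logs every commit reachable from the deployed SHA on each deployment, so in a busy repository you may want to restrict it to commits added since the previous deployment.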

Deployment Frequency

Deployment frequency measures how often your project is deployed to production. High deployment frequency generally indicates a team’s ability to deliver updates quickly and reliably.

To track deployment frequency, log each deployment’s timestamp. Here’s an example using a GitHub action:

name: Track Deployment Frequency
on: deployment
permissions:
  contents: write
jobs:
  deployment-frequency:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log deployment
        run: |
          # Append the current Unix timestamp, one line per deployment
          echo "$(date +%s)" >> deployment_log.csv
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add deployment_log.csv
          git commit -m "Log deployment"
          git push

Failed Deployment Recovery Time

Failed deployment recovery time measures how quickly service is fully restored after an outage or service degradation caused by a change released to production. Depending on the severity of the issue, it may require anything from a quick hotfix to a complete rollback to restore service.

This metric is crucial for understanding the resilience of your systems: the faster you recover from service disruptions caused by deploying changes to production, the less likely it is that users will be negatively impacted.
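For example, if a deployment causes an outage at 14:00 and a hotfix restores service at 14:45, the recovery time for that incident is 45 minutes.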

To log the time delta between a service disruption and restoration, you can use a GitHub action triggered by a repository_dispatch event:

name: Track Failed Deployment Recovery
on:
  repository_dispatch:
    types: [service-disruption, service-restoration]
permissions:
  contents: write
jobs:
  time-to-restore:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log disruption or restoration time
        run: |
          # Log the Unix timestamp first so the report script can parse each row as timestamp,value
          if [ "${{ github.event.action }}" == "service-disruption" ]; then
            echo "$(date +%s),disruption" >> restore_log.csv
          elif [ "${{ github.event.action }}" == "service-restoration" ]; then
            echo "$(date +%s),restoration" >> restore_log.csv
          fi
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add restore_log.csv
          git commit -m "Log service disruption/restoration"
          git push

Note that GitHub has no way of automatically detecting when an application is experiencing a service disruption. This means you must trigger the event yourself: use a monitoring tool to track your application’s status and have it create a repository dispatch event of type service-disruption or service-restoration via the GitHub REST API. Also consider how you will determine whether a service disruption is related to a failed deployment; if your monitoring tool is sophisticated, you can filter out most disruptions unrelated to deployments and only call the GitHub API for relevant events.
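For example, assuming your monitoring tool can run a shell command and you’ve stored a token with repository access in $GITHUB_TOKEN (the owner and repository names below are placeholders), it could report a disruption like this:

curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/dispatches \
  -d '{"event_type": "service-disruption"}'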

Change Failure Rate

Change failure rate is the percentage of your deployments that fail, which helps you understand the stability of your deployment pipeline. Ideally, you should analyze and fix the root causes of deployment failures so that the failure rate trends downward over time.
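For example, if 2 of the 40 deployments you make in a given period fail, your change failure rate for that period is (2 / 40) × 100 = 5 percent.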

To store the data for calculating change failure rate, log the total number of deployments and the number of failed deployments in a GitHub action:

name: Track Change Failure Rate
on: deployment_status
permissions:
  contents: write
jobs:
  change-failure-rate:
    # The deployment_status event has no type filter, so check the state here
    if: github.event.deployment_status.state == 'success' || github.event.deployment_status.state == 'failure'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log deployment status
        run: |
          # Log the Unix timestamp first so the report script can parse each row as timestamp,value
          if [ "${{ github.event.deployment_status.state }}" == "failure" ]; then
            echo "$(date +%s),failure" >> deployment_status_log.csv
          else
            echo "$(date +%s),success" >> deployment_status_log.csv
          fi
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add deployment_status_log.csv
          git commit -m "Log deployment status"
          git push

Calculating Cumulative DORA Metrics

Now that you’ve created all the actions to store the data needed to calculate DORA metrics, let’s see how to create an action that uses this data to generate a DORA metrics report.

To calculate daily DORA metrics for each of the past thirty days, you can create a scheduled GitHub Action, which you can also run on demand, that processes the log files:

name: Calculate Daily DORA Metrics
on:
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * *'
permissions:
  contents: write
jobs:
  calculate-metrics:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pandas
      - name: Calculate daily metrics
        shell: python
        run: |
          import os
          from datetime import datetime, timedelta

          import pandas as pd

          def read_csv(filename):
              # Every log file stores a Unix timestamp first, then an optional value
              if not os.path.exists(filename):
                  return pd.DataFrame(columns=['timestamp', 'value'])
              return pd.read_csv(filename, header=None, names=['timestamp', 'value'])

          def calculate_daily_metrics(df, date):
              start_of_day = date.replace(hour=0, minute=0, second=0, microsecond=0)
              end_of_day = start_of_day + timedelta(days=1)
              df['date'] = pd.to_datetime(df['timestamp'], unit='s')
              return len(df[(df['date'] >= start_of_day) & (df['date'] < end_of_day)])

          def calculate_daily_failure_rate(deployments_df, failures_df, date):
              deployments = calculate_daily_metrics(deployments_df, date)
              failures = calculate_daily_metrics(failures_df[failures_df['value'] == 'failure'], date)
              return (failures / deployments) * 100 if deployments > 0 else 0

          def calculate_daily_lead_time(df, date):
              start_of_day = date.replace(hour=0, minute=0, second=0, microsecond=0)
              end_of_day = start_of_day + timedelta(days=1)
              df['date'] = pd.to_datetime(df['timestamp'], unit='s')
              filtered_df = df[(df['date'] >= start_of_day) & (df['date'] < end_of_day)]
              # Lead times are logged in seconds, so convert the mean to hours
              return filtered_df['value'].mean() / 3600 if len(filtered_df) > 0 else 0

          def calculate_daily_restore_time(df, date):
              start_of_day = date.replace(hour=0, minute=0, second=0, microsecond=0)
              end_of_day = start_of_day + timedelta(days=1)
              df['date'] = pd.to_datetime(df['timestamp'], unit='s')
              filtered_df = df[(df['date'] >= start_of_day) & (df['date'] < end_of_day)]
              disruptions = filtered_df[filtered_df['value'] == 'disruption']
              restorations = filtered_df[filtered_df['value'] == 'restoration']
              total_restore_time = 0
              restored_count = 0
              for _, disruption in disruptions.iterrows():
                  # Pair each disruption with the first restoration that follows it
                  matching = restorations[restorations['timestamp'] > disruption['timestamp']]
                  if len(matching) > 0:
                      total_restore_time += matching.iloc[0]['timestamp'] - disruption['timestamp']
                      restored_count += 1
              # Restore times are in seconds, so convert the mean to hours
              return total_restore_time / restored_count / 3600 if restored_count > 0 else 0

          def generate_mermaid_chart(title, dates, values):
              chart = f"```mermaid\nxychart-beta\n    title \"{title}\"\n"
              # Quote the axis labels so Mermaid doesn't misparse the dashes
              labels = ', '.join(f'"{date.strftime("%d-%m")}"' for date in dates)
              chart += f"    x-axis [{labels}]\n"
              # Avoid a zero-height y-axis when a metric has no data
              max_value = max(values) if max(values) > 0 else 1
              chart += f"    y-axis \"{title}\" 0 --> {max_value * 1.1:.2f}\n"
              chart += f"    bar [{', '.join(f'{value:.2f}' for value in values)}]\n"
              chart += "```\n\n"
              return chart

          now = datetime.now()
          dates = [now - timedelta(days=i) for i in range(30, 0, -1)]

          deployment_log = read_csv('deployment_log.csv')
          deployment_status_log = read_csv('deployment_status_log.csv')
          lead_time_results = read_csv('lead_time_results.csv')
          restore_log = read_csv('restore_log.csv')

          metrics = {
              'Deployments': [calculate_daily_metrics(deployment_log, date) for date in dates],
              'Failure Rate (%)': [calculate_daily_failure_rate(deployment_log, deployment_status_log, date) for date in dates],
              'Lead Time (hours)': [calculate_daily_lead_time(lead_time_results, date) for date in dates],
              'Restore Time (hours)': [calculate_daily_restore_time(restore_log, date) for date in dates]
          }

          with open('daily_metrics.md', 'w') as f:
              f.write("# Daily DORA Metrics (Past 30 Days)\n\n")
              for metric, values in metrics.items():
                  f.write(f"## {metric}\n\n")
                  f.write(generate_mermaid_chart(metric, dates, values))

      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add daily_metrics.md
          git commit -m "Update daily DORA metrics"
          git push

This script processes the CSV log files, calculates the DORA metrics for each of the past thirty days, and outputs the results as Mermaid charts embedded in Markdown. It will run automatically once a day at midnight UTC, and it can also be run manually via the GitHub UI.
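If you prefer the command line, you can also trigger a run with the GitHub CLI; assuming you saved the workflow as calculate-daily-metrics.yml (a placeholder name), the command would be:

gh workflow run calculate-daily-metrics.yml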

Once you’ve added all the actions, push them to your repository so GitHub can process them. Every deployment from the repository will then update the DORA data, making it available when the metrics report is generated.

If you use these actions in a busy production repo, consider adding an action that occasionally rotates the CSV data files to prevent the accumulation of old, unneeded data.
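Here’s a minimal sketch of what such a rotation could look like, assuming a weekly schedule and a 90-day retention window (both arbitrary choices) and shown for deployment_log.csv only; you’d repeat the awk line for each log file:

name: Rotate DORA Data Files
on:
  schedule:
    - cron: '0 1 * * 0'
permissions:
  contents: write
jobs:
  rotate:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Drop rows older than 90 days
        run: |
          CUTOFF=$(date -d '90 days ago' +%s)
          # Keep only rows whose leading Unix timestamp is newer than the cutoff
          awk -F',' -v cutoff=$CUTOFF '$1 >= cutoff' deployment_log.csv > tmp.csv && mv tmp.csv deployment_log.csv
      - name: Commit results
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add deployment_log.csv
          git commit -m "Rotate DORA data files" || echo "Nothing to rotate"
          git push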

Interpreting and Optimizing Your DORA Metrics

Now that you can calculate your DORA metrics, what should you do with the data? Unfortunately, there’s no straightforward answer because it depends heavily on the kind of software your team ships and the type of organization you work in.

Generally, you want to aim for high deployment frequency (e.g., multiple deployments per day), low lead time for changes (e.g., less than one day), quick recovery from failed deployments (e.g., less than one hour), and a low change failure rate (e.g., less than 5 percent). But the exact targets depend on your team’s context. For example, if you currently deploy only once a month, aiming for once a week is a reasonable first step.

So while DORA metrics tell you what is happening, they don’t tell you what to do about it. Even if you identify bottlenecks that slow down your deployment process, it’s not always easy to solve them.

That’s where a developer collaboration tool like Aviator can help. Slow reviews and merges are a major cause of slow deployments, and slow deployments negatively impact all four DORA metrics. Features like FlexReview, MergeQueue, Stacked PRs, and Releases help improve your metrics and make your developers happier.

Conclusion

Regularly reviewing your team’s DORA metrics helps you stay focused on shipping quickly and optimizing your software delivery performance.

Improvement takes time, so calculating DORA metrics is an ongoing task. You need to continually monitor your metrics to identify trends, spot areas for improvement, and measure the impact of any changes you make to development processes. DORA metrics won’t take your team from subpar to world-class overnight, but when used correctly, they will help you steadily improve over time—and Aviator can help you get there more quickly.
