The Economics of Software in eMobility: Financial Strategies & Optimal ROI


Consumer-Driven Contract (3)

03 JANUARY 2020 • 15 MIN READ

Piotr Majcher


Introduction

In our previous articles, Consumer-Driven Contract (1) - how to increase stability in distributed systems and Consumer-Driven Contract (2) - Pact-based implementation, we described the basic principles and an implementation of the Consumer-Driven Contract pattern without any automation.

In this article, we’re going to orchestrate CDC and make it part of the existing CI/CD pipelines. We will refer to the code from the second post, so if you haven’t read it, some parts of this article may be unclear.

We’ve already chosen Pact as the tool that will help us introduce CDC. Naturally, we also decided to use the Pactflow platform, which plays the role of a Pact Broker. This time, we will focus much more on the platform than on our code. We will prepare a simple pipeline using GitLab CI/CD that cooperates with a Pact Broker. Let’s dive into the details!

Consumer pipeline

The picture below describes the Consumer CI/CD flow responsible for validating contracts.

CDC flow

Everything starts with a change in the Consumer codebase. It triggers the entire pipeline and, as part of it, we check whether the Consumer’s assumptions about what the producer delivers are still valid. We described the process of preparing the contract and uploading it to the broker here, but just to recall it: we need two commands. One for generating the contract:

./gradlew clean test --tests io.solidstudio.dev.cdc.consumer.SensorsContractTest

And the second one for publishing it:

./gradlew pactPublish

Before we move those two commands into the pipeline, let’s focus on an important part of publishing a contract: contract versioning.

Contract versioning

The contract version is an internal Pact value that is not exposed to users. Instead, each contract is identified by a consumer version and a provider version. So instead of thinking in terms of contract versioning, we should think in terms of versioning the consumer and provider applications.

According to Pact best practices - which are similar to the best practices of versioning in general - we should be able to easily trace an app version back to a specific point in SCM.

The easiest way to do it is by appending the commit hash to the version. In real life, we would need to prepare some release scripts for our projects, but for the sake of simplicity, we’re going to use the feature of the pact Gradle plugin that allows specifying the consumer version explicitly.

pact {
    publish {
        pactDirectory = "target/pacts"
        pactBrokerUrl = 'https://solidstudio.pact.dius.com.au'
        pactBrokerToken = 'token_value'
        providerVersion = System.getenv("CONSUMER_VERSION")
    }
}

As you can see, the parameter name is “providerVersion”. It’s a bit tricky, because it’s the consumer, not the provider, who publishes the contract. Perhaps the designers named the parameter that way because the API consumer actually acts as the contract provider.

CONSUMER_VERSION will be populated by the GitLab pipeline with the current value during the pipeline run. We’re ready to prepare the first box from the diagram.
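In our setup, this can be as simple as mapping GitLab’s predefined CI_COMMIT_SHORT_SHA onto the variable read by the Gradle plugin. A minimal sketch of the relevant pipeline fragment (the mapping itself is our assumption; CI_COMMIT_SHORT_SHA is a standard GitLab variable):

```yaml
variables:
    # CI_COMMIT_SHORT_SHA is predefined by GitLab for every pipeline run;
    # here we pass it on as the version published with the contract.
    CONSUMER_VERSION: $CI_COMMIT_SHORT_SHA
```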

Sharing the contract

Here is the part of the GitLab pipeline responsible for generating and pushing the contract.

stages:
    - publish_contract
    - can_i_deploy

before_script:
    - export GRADLE_USER_HOME=`pwd`/.gradle

publish_contract:
    image:
        name: openjdk:11
    stage: publish_contract
    script:
        - ./gradlew test --tests io.solidstudio.dev.cdc.consumer.SensorsFacadeSensorContractTest
        - ./gradlew pactPublish 

There are two steps we need to take here:

  • running a test suite that builds the contract,
  • and invoking the Pact Gradle plugin to publish it.

Triggering the producer to validate the contract

Pactflow offers two main types of webhooks:

  • contract published
  • contract verified

A webhook takes the form of an HTTP POST call. When a new contract is published, we want to trigger the API Provider pipeline that is able to validate it. We’re going to prepare the Provider pipeline in the next section.

Let’s focus now on triggering the Provider pipeline. The GitLab platform has an HTTP API that allows us to integrate it with other tools. We can easily generate an API token to trigger project pipelines:

CDC GitLab Pipeline Trigger
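A trigger created this way is invoked with a plain HTTP POST, for example (project id, trigger token, and branch are placeholders):

```shell
# Start a pipeline on the given branch using a GitLab pipeline trigger token.
curl --request POST \
     --form "token=<trigger_token>" \
     --form "ref=master" \
     "https://gitlab.com/api/v4/projects/<project_id>/trigger/pipeline"
```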

We have the URL now, so we can configure the Pactflow webhook:

Pactflow Webhook

Validating the contract

The code responsible for validating the contract has been presented in a previous post. Let’s just go back to the command used to execute the check.

./gradlew test --tests io.solidstudio.dev.cdc.producer.ProviderTest

As we can see, it looks like something we can move to the CI/CD pipeline easily.

# DinD service is required for Testcontainers
services:
    - docker:dind

variables:
    # Instruct Testcontainers to use the daemon of DinD.
    DOCKER_HOST: "tcp://docker:2375"

stages:
    - Verify contract

before_script:
    - export GRADLE_USER_HOME=`pwd`/.gradle

verify_contract:
    image: openjdk:11
    stage: Verify contract
    script:
        - ./gradlew test --tests io.solidstudio.dev.cdc.producer.ProviderTest

There are some additional settings required by Testcontainers, which we use for testing, but functionally, it’s just invoking a single Gradle task. Because we built the contract validation logic using the Pact library, the verification result is shared with the Pact Broker automatically. Now it’s time to resume the consumer’s pipeline, which is waiting for information about the finished contract validation.

Continuing the consumer’s pipeline

Here’s where Pactflow webhooks come into play again, but it’s going to be a bit more complicated this time. GitLab’s API allows us to resume a stopped pipeline, but we need to know the id of the job we want to trigger. Every pipeline run creates a new job instance with a new id, so we can’t prepare a static URL for continuing the consumer’s pipeline. And since several pipelines can run at the same time, we can’t simply refer to the most recent one either. How do we determine which pipeline should be unlocked?

Fortunately, there are a few features and tools that make it possible.

First, there are dynamic variables that can be used to construct a Pactflow webhook URL or body. So our endpoint may contain, for instance, the consumer’s name and version, the producer’s name and version, and other variables. We decided to use the commit’s short id as part of the consumer version, so the webhook URL may contain a commit identifier. But that’s not enough to trigger a blocked pipeline job. We have to find the job id.
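For illustration, assuming the broker’s standard webhook template parameters, the webhook URL registered in Pactflow could look like this (the host and path of our endpoint are hypothetical):

```
https://<api-gateway-host>/trigger-consumer-pipeline/${pactbroker.consumerVersionNumber}
```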

As we already mentioned, GitLab exposes an HTTP API. A job is a first-class entity there, so we can perform useful operations on it. For instance, we can look up all of a project’s jobs with a specific scope:

/api/v4/projects/${projectId}/jobs?scope[]=manual

Note: the projectId can be found in the GitLab frontend.

consumer-driven contract ID
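With a personal access token, the query can be tried from the command line (token and project id are placeholders):

```shell
# List all manual jobs of the project.
curl --header "PRIVATE-TOKEN: <api_token>" \
     "https://gitlab.com/api/v4/projects/<project_id>/jobs?scope[]=manual"
```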

We’re interested in jobs with the manual scope because that’s how we implemented the job waiting to be triggered. A manual job is a job that can be triggered by a click or an API call. The response contains a detailed description of each job. I removed 90% of the response to focus on the parts that are most important for us:

[
    {
        "id": 389946907,
        "status": "manual",
        "stage": "can_i_deploy",
        "name": "can_i_deploy",
        "commit": {
            "id": "7d0a26d9a3324891eb1ecb0f470721851ba4e42d",
            "short_id": "7d0a26d9"
        }
    }
]

Good news: we have both the commit short id and the job’s id! So, in theory, we found a way to map the commit short id to a job id (or multiple ids). Now we have to trigger the job. Here’s the GitLab API method we can use:

/api/v4/projects/${projectId}/jobs/${jobId}/play

We have the theory ready. But where to put all this code? Pactflow only allows entering a string as a webhook URL, so the logic sounds like a good candidate for an AWS Lambda. And that’s how we implemented it. The Lambda exposes an endpoint accepting the commit short id (acting as the consumer version) and implements a really simple algorithm:

  • Get all jobs from the specified project with the scope "manual",
  • Find jobs whose commit short id matches the requested one,
  • Trigger the jobs found in step 2 by their ids.

The Lambda is exposed to the world using the AWS API Gateway and can be called by the Pactflow webhook. Here is the most important part of the Lambda: querying jobs and filtering by the commit short id:

const https = require('https');

const getJobs = (contractVersion, projectId) => {
    return new Promise((resolve, reject) => {
      const options = {
        hostname: 'gitlab.com',
        port: 443,
        path: `/api/v4/projects/${projectId}/jobs?scope[]=manual`,
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
          'PRIVATE-TOKEN': 'api token',
        },
      };
      const req = https.request(options, res => {
        res.setEncoding('utf8');
        let body = '';
        res.on('data', chunk => {
          body += chunk;
        });
        res.on('end', () => {
          const jobs = JSON.parse(body);
          const jobsToTrigger = [];
          jobs.forEach(job => {
            const commitShortId = job.commit.short_id;
            if (contractVersion === commitShortId) {
              jobsToTrigger.push(job.id);
            }
          });
          resolve(jobsToTrigger);
        });
      });
      req.on('error', e => {
        reject(e.message);
      });
      req.end();
    });
  };

We’re not going to discuss the topic of security at the moment but, of course, such an endpoint needs to be secured somehow.

The code above is simple, but someone may argue that maintaining such custom code is not the best option. And actually, who should maintain it? Keep in mind how many different CI/CD platforms companies use: some concepts in one platform don’t exist in others, or have different definitions, so a generic, tool-agnostic solution is hard to build.

A successful introduction of CDC requires work on both sides, consumer and producer. In my opinion, this specific code should be maintained by the consumer, who knows best how to trigger its own pipeline.

There’s another very important reason to keep this code on the consumer side: security. Giving an external company access to our GitLab API may not be an option. Exposing one very specific method sounds much safer.

Can I deploy?

The webhook is triggered when a contract is verified, no matter the result of the verification: the HTTP POST call to the registered URL is performed even if the contract doesn’t pass. It’s the consumer’s responsibility to decide whether the new version should be deployed or not. Pact provides a CLI tool that checks whether a given version of our code can be deployed safely. The tool has the self-descriptive name “can-i-deploy”. We specify a few parameters and, as a result, we get a go/no-go decision. The invocation in our case looks as follows:

pact-broker can-i-deploy --pacticipant sensor_management --version $CI_COMMIT_SHORT_SHA --broker-base-url 'https://solidstudio.pact.dius.com.au' --broker-token token_here

We can install the CLI manually, but Pact also delivers a Docker image containing all the required dependencies. That makes wrapping this invocation in a pipeline job really straightforward:

can_i_deploy:
    image:
        name: pactfoundation/pact-cli:latest
    stage: can_i_deploy
    dependencies:
        - publish_contract
    script:
        - pact-broker can-i-deploy --pacticipant sensor_management --version $CI_COMMIT_SHORT_SHA --broker-base-url 'https://solidstudio.pact.dius.com.au' --broker-token 'token_here'
    when: manual
    allow_failure: false
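Before wiring it into CI, the same check can be run locally through that image. A sketch with placeholder values, assuming the image lets us invoke pact-broker directly as the container command:

```shell
docker run --rm pactfoundation/pact-cli:latest \
    pact-broker can-i-deploy \
    --pacticipant sensor_management \
    --version <commit_short_sha> \
    --broker-base-url 'https://solidstudio.pact.dius.com.au' \
    --broker-token <token>
```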

This crowns the introduction of Consumer-Driven Contracts into the Consumer’s CI/CD pipeline. Here’s the full Consumer’s pipeline diagram:

consumer-driven contact pipeline diagram

Summary

Consumer-Driven Contracts/Testing is a pattern that is relatively expensive to set up. Once in place, it also requires cooperation between different teams, systems, and tools, which can be time-consuming. But in the end, it gives us much more confidence when applying changes to our API.