Building A CD Pipeline With Drone CI And Kubernetes

DevOps, Kubernetes, CI/CD, Drone, Pipeline

In a previous article, we discussed what CI/CD is, the kind of problems it tries to solve, and where it stands in the DevOps paradigm. We also built a small lab where we used Jenkins, Ansible, and Kubernetes to build a Continuous Delivery (CD) pipeline. In this article, we will be building a similar CD pipeline, but this time we are using Drone CI.

What Is Drone CI?

Written in Go and first released in 2014, Drone CI is a build tool that takes a container-first approach. Its most prominent feature is that it uses containers for everything: every stage of even the most complex pipeline runs in a Docker container. This offers a great deal of flexibility when you need several tools or environments for your build and deployment needs. Unlike Jenkins, Drone CI must be integrated with a Git repository to function. It also has a number of plugins, each shipped as a Docker container, that can be composed in a pipeline the same way UNIX tools (ls, cat, tee, etc.) are composed on the command line.

Drone CI takes its instructions from a YAML file that is checked into the repository with the rest of the application code. This is quite similar to declarative pipelines in Jenkins when they are configured to use a Jenkinsfile stored in the repository; the difference is that Jenkins uses a Groovy-based DSL, while Drone uses plain YAML.

Our Sample Application: A Hello-World API Written In Go

In this lab, we streamline the process of building, testing, and delivering a simple API written in Go. The API should display “hello world” when the root URL receives a GET request. Let’s start with the application code (main.go):

package main

import (
   "log"
   "net/http"
)

type Server struct{}

func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
   // Set headers before calling WriteHeader; headers written
   // after WriteHeader are silently ignored.
   w.Header().Set("Content-Type", "application/json")
   w.WriteHeader(http.StatusOK)
   w.Write([]byte(`{"message": "hello world"}`))
}

func main() {
   s := &Server{}
   http.Handle("/", s)
   log.Fatal(http.ListenAndServe(":8080", nil))
}

We also need to run some tests on the code. Go makes this very easy: we only need to add a file containing our test functions and call it main_test.go:

package main

import (
   "io/ioutil"
   "net/http"
   "net/http/httptest"
   "testing"
)

func TestServeHTTP(t *testing.T) {
   handler := &Server{}
   server := httptest.NewServer(handler)
   defer server.Close()

   resp, err := http.Get(server.URL)
   if err != nil {
       t.Fatal(err)
   }
   if resp.StatusCode != 200 {
       t.Fatalf("Received non-200 response: %d\n", resp.StatusCode)
   }
   expected := `{"message": "hello world"}`
   actual, err := ioutil.ReadAll(resp.Body)
   if err != nil {
       t.Fatal(err)
   }
   if expected != string(actual) {
       t.Errorf("Expected the message '%s' but got '%s'\n", expected, actual)
   }
}

For building an image, we need a Dockerfile. It may look like this:

# Build stage: compile a statically linked binary
FROM golang:alpine AS build-env
RUN mkdir /go/src/app && apk update && apk add git
COPY main.go /go/src/app/
WORKDIR /go/src/app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o app .

# Final stage: copy only the binary into an otherwise empty image
FROM scratch
WORKDIR /app
COPY --from=build-env /go/src/app/app .
ENTRYPOINT [ "./app" ]

 

Installing And Configuring Drone CI On Kubernetes

Installing Helm

The easiest and fastest way to install Drone CI is to use its Helm chart. If you don’t know what Helm is, it’s simply a package manager for Kubernetes. Whenever you have an application that needs several Kubernetes resources (Deployments, Services, ConfigMaps, Secrets, etc.), you can package it in a Helm chart that deploys (or removes) everything through a single command. Helm itself is made up of two components: the client tool (helm), a binary that runs on your laptop, and a server part (Tiller) that runs inside the cluster.

Provided that you already have administrative access to a running Kubernetes cluster, you can install Helm by following these simple steps:

  • Install the client
# cd /tmp
# curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
# chmod u+x install-helm.sh
# ./install-helm.sh

  • Once the client is installed, we need to install the server part:
# kubectl -n kube-system create serviceaccount tiller
# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# helm init --service-account tiller

The client is just a binary that gets saved on your laptop. For Tiller to be deployed, we need to create a Service Account for it, which we called tiller. Then we create a ClusterRoleBinding to bind the service account to cluster-admin, a built-in role that ships with Kubernetes. As the name suggests, this role grants its holders administrative privileges on the cluster.
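For reference, the two kubectl commands above are equivalent to applying a manifest like the following sketch (same resource names as used above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Keeping the manifest in version control makes it easier to recreate the setup later with a single kubectl apply -f.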

Deploying Drone CI

Before deploying Drone, we must create an OAuth application on our repository hosting service. In this lab, we are using GitHub, but the same concept applies to other major services such as Bitbucket and GitLab.

For GitHub, go to https://github.com/settings/developers and create a new application:

 

[Screenshot: creating a new OAuth application on GitHub]

Now, you may notice that we didn’t specify a valid URL for the homepage or for the callback URL. That’s because we don’t have the correct URLs yet; once we create the Service on Kubernetes, we can come back and change them accordingly. For now, we just need the client ID and the client secret:

 

[Screenshot: the OAuth application’s client ID and client secret]

 

We need Kubernetes to have access to this client secret so that Drone CI can use it when necessary. Let’s create a Secret to hold it:

kubectl create secret generic drone-server-secrets --namespace=default --from-literal=clientSecret="80b2fd9ae648384df5fb87955a5ed34962bf3c49"

 

Helm uses a file called values.yaml where you can define or override the chart’s default values. Once we have Helm and Tiller in place, it’s relatively easy to deploy Drone using its Helm chart:

helm install --name drone-release stable/drone

You may notice an error message in the output warning that the setup is incomplete because we didn’t specify a version control system, something like the following:

NOTES:
##############################################################################
####        ERROR: You did not set a valid version control provider       ####
##############################################################################

This means that we still need to provide important information for our deployment to function correctly. Let’s do that next:

helm upgrade drone-release --reuse-values --set 'service.type=NodePort' --set 'service.nodePort=32000' --set 'sourceControl.provider=github' --set 'sourceControl.github.clientID=0cb6e52e26befec35a3e' --set 'sourceControl.secret=drone-server-secrets' stable/drone

Helm accepts custom variables that are used in the deployment in either of two ways:

  • Using a values.yaml file.
  • By setting the values directly in the command line using the --set option.

Both methods are commonly used in combination, with the values.yaml file containing the default values and the --set options overriding them.

In our example, we are setting a number of important values for our Helm deployment:

  • service.type is set to NodePort. We can set it to LoadBalancer or ClusterIP according to our preference.
  • service.nodePort: since we’re using the NodePort Service type, we’re setting the NodePort so that we can enable it on the firewall.
  • The provider is github.
  • The last two options set the client ID and point Drone at the Kubernetes Secret (drone-server-secrets) holding the client secret, so that Drone can authenticate with GitHub.
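The same configuration could instead live in a values.yaml file passed to helm with -f; a sketch, assuming the same chart keys used in the command above:

```yaml
service:
  type: NodePort
  nodePort: 32000
sourceControl:
  provider: github
  secret: drone-server-secrets
  github:
    clientID: 0cb6e52e26befec35a3e
```

With that file in place, the upgrade becomes helm upgrade drone-release -f values.yaml stable/drone, which is easier to review and keep in version control than a long chain of --set flags.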

If you need to set a different value or explore other options, you can refer to the Helm Chart documentation page: https://github.com/helm/charts/tree/master/stable/drone

Now, we need to get the IP address of our Drone server to update the deployment, and also update our GitHub application settings:

$ kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE     VERSION          INTERNAL-IP   EXTERNAL-IP      OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-standard-cluster-1-default-pool-e317cef6-79bh   Ready    <none>   4h32m   v1.15.4-gke.18   10.128.0.60   35.239.209.198   Container-Optimized OS from Google   4.19.76+         docker://19.3.1

Having obtained the external IP address, we can update the deployment as follows:

helm upgrade drone-release --reuse-values --set 'server.host=http://35.239.209.198:32000' stable/drone

Next, update the GitHub application so that its homepage and callback URLs point at this address.

Now navigate to http://35.239.209.198:32000. You may need to authorize the application in GitHub. Next, you should see a page containing all the repositories in your account:

drone 2

You need to activate the repository by clicking on the Activate link next to its name.

Building The Pipeline

Drone uses a YAML file called .drone.yml, which is checked into the repository with the rest of the code files. You can optionally change the name and the path of this file through the repository settings in the Drone UI. The .drone.yml file contains a pipeline that may have one or more steps. For a step to be reached, the previous step must have executed successfully. Let’s have a look at how our .drone.yml file may look:

kind: pipeline
name: default
steps:
- name: test
  image: golang:1.10-alpine
  commands:
    - "go test"

The file starts with the definition and the name of the pipeline, default. Then we write our first step: testing. We use a golang image that contains the go binary. Once the image is invoked, the workspace directory is automatically mounted to the container. Next, we run go test, which will run any test functions in the main_test.go file. If any of the tests fail, the pipeline fails and the output explains which test case failed.

- name: build
  image: golang:1.10-alpine
  commands:
    - "go build -o ./myapp"

The next step builds the binary. While we’re not going to make any use of the binary that gets created, this step is important since it verifies that the code builds without problems.

- name: publish
  image: plugins/docker
  settings:
    repo: magalixcorp/k8scicd
    tags: [ "${DRONE_COMMIT_SHA:0:7}", "latest" ]
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password

The publishing step is where the code actually gets baked into a Docker image and pushed to the registry so that it can be used later in any environment. Let’s have a look at the interesting parts of this step:

  • We’re using the Docker plugin of Drone. This plugin allows us to build Docker images from the code using the supplied Dockerfile.
  • The plugin needs the image name (repo), the tags, and also the credentials of the registry. In our example, we’re using Docker Hub.
  • Notice that we shouldn’t add the username and password directly in the file for security reasons. Instead, we use the from_secret directive which enables us to retrieve values from Drone’s Secrets.
  • We need to add the Docker username and password to Drone. This can be done through the settings page of the repository as shown in the screenshot:
[Screenshot: adding the Docker Hub credentials as secrets in the repository settings]
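The tags entry uses Drone’s bash-style substring syntax to keep only the first seven characters of the commit SHA, giving each image a short, unique tag. As an illustration, the truncation is equivalent to this Go sketch (shortSHA is a hypothetical helper, not part of Drone):

```go
package main

import "fmt"

// shortSHA mimics ${DRONE_COMMIT_SHA:0:7}: it returns the first
// seven characters of the full commit SHA.
func shortSHA(sha string) string {
	if len(sha) <= 7 {
		return sha
	}
	return sha[:7]
}

func main() {
	full := "3b8c04f2a1d9e7c6b5a4f3e2d1c0b9a8f7e6d5c4"
	fmt.Println(shortSHA(full)) // prints "3b8c04f"
}
```

Tagging with the short SHA as well as latest means every build is individually addressable in the registry while consumers can still pull the most recent image.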

Now comes the last part of the pipeline which is responsible for actually deploying the image to Kubernetes:

- name: deliver
  image: sinlead/drone-kubectl
  settings:
    kubernetes_server:
      from_secret: k8s_server
    kubernetes_cert:
      from_secret: k8s_cert
    kubernetes_token:
      from_secret: k8s_token
  commands:
    - kubectl apply -f deployment.yml

 

In this step, we’re using an image that contains kubectl. Please note that this image was designed to work with Drone. You can examine the tweaks that the author made by looking at the project’s repository: https://github.com/sinlead/drone-kubectl

Notice that the step includes invoking a file called deployment.yml. This is the file that contains all the resources that we need. It looks as follows:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    role: app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 32001
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    role: app
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
    spec:
      containers:
      - name: app
        image: "magalixcorp/k8scicd"
        resources:
          requests:
            cpu: 10m

You may have also noticed that we’re injecting secret variables from Drone into this pipeline step so that Drone can authenticate and be authorized against Kubernetes:

kubernetes_server is the address of the Kubernetes API server, and kubernetes_cert is the CA certificate that the cluster uses. The certificate can be obtained through the following command:

$ kubectl get secret drone-token-sn77f -o jsonpath='{.data.ca\.crt}' && echo

The next secret is kubernetes_token, which is the token of the service account that we’re using. It can be obtained through:

$ kubectl get secret drone-token-sn77f -o jsonpath='{.data.token}' | base64 --decode && echo

Here, drone-token-sn77f is the name of the Secret that was created for the service account (the random suffix will be different in your cluster). You can find it by listing the Secrets:

$ kubectl get secrets

Of course, drone is the name of the service account that has the required access to create and manage the involved resources. Creating and authorizing the service account is out of the scope of this article, but you can find a detailed walkthrough in our article Kubernetes Authorization.

Invoking The Pipeline

Before triggering the pipeline, we need to make sure that the webhook is correctly configured on GitHub so that events are sent to the correct endpoint. Go to the repository’s settings page, then Webhooks; the page should look like this:

[Screenshot: the webhook configured on the GitHub repository]

Now, let’s make a change to the repo, commit and push the code to GitHub. In a few seconds, the pipeline build will be triggered. The following screenshot displays the pipeline running:

[Screenshot: the pipeline build running in the Drone UI]

And the full .drone.yml file is listed here for completeness:

kind: pipeline
name: default
steps:
- name: test
  image: golang:1.10-alpine
  commands:
    - "go test"
- name: build
  image: golang:1.10-alpine
  commands:
    - "go build -o ./myapp"
- name: publish
  image: plugins/docker
  settings:
    repo: magalixcorp/k8scicd
    tags: [ "${DRONE_COMMIT_SHA:0:7}", "latest" ]
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
- name: deliver
  image: sinlead/drone-kubectl
  settings:
    kubernetes_server:
      from_secret: k8s_server
    kubernetes_cert:
      from_secret: k8s_cert
    kubernetes_token:
      from_secret: k8s_token
  commands:
    - kubectl apply -f deployment.yml

The last thing to look at is an interesting feature of Drone: when running on Kubernetes, Drone executes each pipeline through Kubernetes Jobs. Let’s take a look:

$ kubectl get jobs
NAME                             COMPLETIONS   DURATION   AGE
drone-job-1-eksquogvevfu79trt    1/1           27s        35h
drone-job-10-xhuvskgleefv9zljj   1/1           59s        31h
drone-job-11-qp3mutcw3mkl7gfsn   1/1           57s        31h
drone-job-12-bmyuq8acki1uf7ldz   1/1           71s        31h
drone-job-13-jdkyeuxaemksq8jjw   1/1           73s        31h

TL;DR

  • Drone CI is a relatively recent CI/CD tool: it’s open-source, written in Go, and was first released in 2014.
  • Drone treats Docker containers as first-class citizens. Unlike other build tools like Jenkins, Drone uses containers in all the build steps.
  • Drone also has a number of plugins that can be dropped in the pipeline directly.
  • To receive its instructions, Drone uses a YAML file called .drone.yml (the name can be changed in the settings if needed). The file is part of the repository, which makes it easier to make “atomic” changes to the code and the pipeline together.
  • Drone integrates with the code repository out of the box. All the major online Git services are supported, including GitHub, Bitbucket, and GitLab.
  • In this lab, we demonstrated a simple CD pipeline in which we used a simple API written in Go. Using Drone, we were able to build, test, Dockerize and deliver our application automatically to the target environment.
  • When Drone is run as a Pod on Kubernetes, it makes use of the features offered by the system. For example, the pipeline is run through Kubernetes jobs.
