Installing Jenkins on Docker: A Beginner's Guide with Dockerfile and Docker Compose

A comprehensive guide to setting up Jenkins on Docker, including a Dockerfile and Docker Compose configuration for a beginner-friendly approach.


Introduction and Overview

Introduction: What is Jenkins and Why is it Useful for CI/CD?

Jenkins is an open-source automation server widely used for Continuous Integration (CI) and Continuous Deployment (CD). It helps developers automate tasks related to building, testing, and deploying software. CI/CD pipelines are essential for ensuring that code is continuously integrated and delivered to production in an efficient and reliable manner.

In simpler terms:

  • Continuous Integration (CI) means automatically checking and testing the new code changes developers make to ensure everything works as expected.
  • Continuous Deployment (CD) means automatically deploying those changes to production once they are verified.

Why Jenkins is Useful:

  • Automation: Jenkins helps automate repetitive tasks, freeing up developers to focus on writing code.
  • Integration with Tools: Jenkins integrates with a wide variety of tools like Git, Docker, Kubernetes, and more, making it a central part of most DevOps pipelines.
  • Scalability: You can scale Jenkins to handle large projects and teams by distributing the work across multiple servers or containers.
  • Extensibility: With over 1,800 plugins, Jenkins can be customized to fit almost any workflow or toolchain.

Docker & Jenkins: Why Run Jenkins on Docker?

Running Jenkins on Docker offers a lot of benefits, especially when you’re looking to get started quickly or if you want to run Jenkins in isolated, portable environments.

Benefits of Dockerizing Jenkins:

  1. Easy Setup: Docker allows you to set up Jenkins without worrying about dependencies, as Docker images come pre-configured for Jenkins.
  2. Isolation: With Docker, Jenkins runs in its own container, so it won't interfere with other services or software on your system.
  3. Portability: Once you’ve set up Jenkins inside a Docker container, you can easily move it to other environments (e.g., development, staging, production) without needing to make changes to the setup.
  4. Version Control: You can version your Jenkins configuration using Docker, making it easy to track changes and roll back if necessary.
  5. Resource Efficiency: Docker containers are lightweight compared to virtual machines, allowing you to run Jenkins more efficiently.

Example:

To make this simpler, imagine trying to set up Jenkins on your computer manually. You’d need to install Java, set environment variables, download Jenkins, and then configure it. With Docker, you can skip all that and simply use a pre-built Jenkins image to run Jenkins with just a single command:

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts-jdk17

This saves time and reduces the risk of configuration errors.


Prerequisites: What Do You Need?

Before you start, there are a few things you’ll need to have set up on your machine.

  1. Docker: Docker is a tool that allows you to run containers. It's necessary for creating and managing Jenkins in an isolated environment. You can download it from the official Docker website.
  2. Docker Compose: Docker Compose is a tool that helps you define and run multi-container Docker applications. You'll use it to orchestrate the Jenkins container alongside other services (if needed). Docker Compose typically comes bundled with Docker Desktop for Windows and Mac. For Linux, you may need to install it separately (see the sketch below).
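
On Debian or Ubuntu, one minimal way to get Compose (assuming Docker's apt repository is already configured, as described in Docker's installation docs) is to install the Compose plugin:

# Install the Docker Compose plugin (Debian/Ubuntu, Docker apt repo configured)
sudo apt-get update
sudo apt-get install -y docker-compose-plugin

With the plugin, Compose is invoked as docker compose; the older standalone binary is invoked as docker-compose, which is the form used in the commands later in this guide.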

Verifying Your Installation:

After installing Docker and Docker Compose, verify they’re working correctly:

# Check Docker installation
docker --version

# Check Docker Compose installation
docker-compose --version

You should see version information for both tools if they’re installed correctly.

Project Structure

To follow along with this guide, we’ll be creating the following files:

  • Dockerfile: For building a custom Jenkins image
  • jenkins.yaml: For configuring Jenkins declaratively with Configuration as Code
  • docker-compose.yml: For orchestrating Jenkins with other services
  • jenkins-setup.sh: An optional script for automating Jenkins configuration via the REST API
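
Before diving in, here's a rough sketch of the project layout we're aiming for (the directory name is arbitrary):

jenkins-docker/
├── Dockerfile
├── jenkins.yaml
├── docker-compose.yml
└── jenkins-setup.sh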

Let’s dive into creating each of these components.


Setting Up Jenkins with Dockerfile

In this section, we’ll walk through how to create a Dockerfile to set up Jenkins inside a Docker container. A Dockerfile is simply a text file that contains instructions on how to build a Docker image. Think of it like a recipe that tells Docker how to assemble your environment (in this case, Jenkins).

Base Image: Use the Official Jenkins Image as a Base

To begin, we’ll use the official Jenkins image from Docker Hub. Using an official image means we don’t have to build Jenkins from scratch, as it’s already pre-configured and maintained by the Jenkins team.

What is a Base Image?

A base image is like the foundation of a house—it’s where everything starts. In this case, the Jenkins base image has all the necessary tools and configurations to run Jenkins.

FROM jenkins/jenkins:lts-jdk17

This command tells Docker to pull the Long-Term Support (LTS) version of Jenkins with Java 17 from Docker Hub.

Why use the official Jenkins image?

The official Jenkins image is preconfigured to run Jenkins with all the dependencies installed. This saves time and ensures that you have a stable, supported version of Jenkins.

Installing Dependencies: Plugins and Tools

System Dependencies

You may need additional system tools for your specific use case. Here’s how to install them:

USER root
RUN apt-get update && apt-get install -y \
    vim \
    curl \
    git \
    lsb-release \
    ca-certificates \
    gnupg

# Clean up APT to reduce image size
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

These instructions:

  1. Switch to the root user (installing packages requires admin privileges).
  2. Update the package list in the container and install several useful tools.
  3. Clean up the APT cache to keep the image small. Note that a cleanup in a separate RUN only shrinks the image if it happens in the same layer as the install; the full Dockerfile at the end of this section combines both into a single RUN for exactly that reason.

Installing Docker CLI (for Docker-in-Docker operations)

If you want Jenkins to build Docker images, you need to install the Docker CLI:

# Add Docker's official GPG key
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker CLI
RUN apt-get update && apt-get install -y docker-ce-cli

# Switch back to jenkins user
USER jenkins
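
The Docker CLI on its own is only a client; Jenkins still needs a Docker daemon to talk to. In this guide that daemon is provided by the Docker-in-Docker service in the Compose setup later on. A simpler (but less isolated) alternative is to share the host's Docker socket with the container. Here's a sketch, using the image we build later in this section; note that the jenkins user inside the container needs permission to read and write the socket:

# Alternative to Docker-in-Docker: let Jenkins use the host's Docker daemon
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name jenkins my-jenkins:latest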

Jenkins Plugins

Jenkins uses plugins to extend its functionality. Instead of installing plugins via the UI, we can automate this process:

# Install Jenkins plugins using jenkins-plugin-cli
RUN jenkins-plugin-cli --plugins \
    git \
    workflow-aggregator \
    blueocean \
    docker-workflow \
    credentials-binding \
    pipeline-utility-steps \
    job-dsl \
    configuration-as-code

This approach:

  • Uses the newer jenkins-plugin-cli tool (preferred over the older install-plugins.sh).
  • Installs essential plugins for CI/CD pipelines, including Git integration, Docker support, the modern Blue Ocean UI, and Job DSL (which the Configuration as Code jobs section shown later relies on).
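
If you want reproducible image builds, jenkins-plugin-cli can also read the plugin list from a file, which makes it easy to pin exact plugin versions. A minimal sketch, assuming you keep a plugins.txt next to the Dockerfile (the file name and the pinned versions are yours to choose):

# Copy a pinned plugin list into the image and install from it
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt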

Configuration as Code: Setting Up Jenkins Declaratively

A modern approach to Jenkins configuration is using the Jenkins Configuration as Code (JCasC) plugin, which allows you to define your Jenkins configuration declaratively:

# Create a directory for Jenkins Configuration as Code
RUN mkdir -p /var/jenkins_home/casc_configs

# Copy configuration file
COPY jenkins.yaml /var/jenkins_home/casc_configs/jenkins.yaml

# Tell Jenkins to use Configuration as Code
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs/jenkins.yaml

This requires creating a jenkins.yaml file in the same directory as your Dockerfile:

jenkins:
  systemMessage: "Jenkins configured automatically by Jenkins Configuration as Code plugin"
  numExecutors: 2

  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "admin"

  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false

This approach is far superior to the script-based approach as it’s declarative, version-controllable, and less error-prone.

Exposing Ports: Enabling Jenkins UI Access

Jenkins has a user interface (UI) where users can configure Jenkins, trigger jobs, and monitor pipelines. To access this UI from your browser, you need to expose the port that Jenkins listens on.

EXPOSE 8080
EXPOSE 50000

These instructions document the ports the container listens on; they don't publish anything by themselves. You still map them to host ports with -p (or the ports section in Docker Compose):

  • 8080: The main web interface for Jenkins.
  • 50000: Used for Jenkins agent communication (when you scale out with build agents).

Full Dockerfile Example

Here’s the complete Dockerfile combining everything we’ve discussed:

# Use the official Jenkins LTS image with Java 17 as the base image
FROM jenkins/jenkins:lts-jdk17

# Switch to root user to install dependencies
USER root

# Install necessary tools and dependencies
RUN apt-get update && apt-get install -y \
    vim \
    curl \
    git \
    lsb-release \
    ca-certificates \
    gnupg \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Add Docker's official GPG key
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker CLI
RUN apt-get update && apt-get install -y docker-ce-cli \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Switch back to jenkins user
USER jenkins

# Install Jenkins plugins using jenkins-plugin-cli
RUN jenkins-plugin-cli --plugins \
    git \
    workflow-aggregator \
    blueocean \
    docker-workflow \
    credentials-binding \
    pipeline-utility-steps \
    job-dsl \
    configuration-as-code

# Create a directory for Jenkins Configuration as Code
RUN mkdir -p /var/jenkins_home/casc_configs

# Copy configuration file
COPY jenkins.yaml /var/jenkins_home/casc_configs/jenkins.yaml

# Tell Jenkins to use Configuration as Code
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs/jenkins.yaml

# Expose Jenkins ports
EXPOSE 8080
EXPOSE 50000

Building Your Jenkins Docker Image

Once you’ve created the Dockerfile, build your custom Jenkins image:

docker build -t my-jenkins:latest .

This creates a Docker image named my-jenkins with the tag latest.
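
As a quick, optional sanity check, you can confirm the image exists and spot-check that the tools from the Dockerfile made it in:

# Confirm the image was built
docker images my-jenkins

# Spot-check the installed tools (overriding the Jenkins entrypoint)
docker run --rm --entrypoint git my-jenkins:latest --version
docker run --rm --entrypoint docker my-jenkins:latest --version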

Running Your Jenkins Docker Container

To run Jenkins using your custom image:

docker run -d -p 8080:8080 -p 50000:50000 --name jenkins my-jenkins:latest

Now you can access Jenkins at http://localhost:8080.
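
If the UI doesn't respond right away, watch the startup logs; with the JCasC file baked into the image, you can then log in with the admin account defined in jenkins.yaml instead of going through the setup wizard:

# Follow the Jenkins startup logs
docker logs -f jenkins

Note that this simple docker run doesn't mount a volume, so the Jenkins home lives inside the container; the Docker Compose setup in the next section adds persistent storage.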


Running Jenkins with Docker Compose

Introduction

In this section, we will explore how to run Jenkins using Docker Compose. Docker Compose simplifies the setup of multi-container applications by allowing you to define and manage multiple containers in a single YAML file. This is particularly useful when you need Jenkins to work with other services (e.g., databases, build agents).

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you define a multi-container application in a single file (usually named docker-compose.yml), and then you use simple commands to spin up your entire application.

How Does Docker Compose Help Jenkins?

With Jenkins, we often need to configure various services. Some common scenarios include:

  1. Docker-in-Docker: Allowing Jenkins to build Docker images.
  2. Persistent Storage: Ensuring Jenkins data persists across container restarts.
  3. Networking: Setting up networks for Jenkins to communicate with other services.

Docker Compose makes managing these services much easier.

Creating a Docker Compose Configuration

Let’s create a docker-compose.yml file for Jenkins:

version: "3.8"

services:
  jenkins:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: jenkins
    restart: unless-stopped
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins-network

  docker:
    image: docker:dind
    container_name: jenkins-docker
    privileged: true
    restart: unless-stopped
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - jenkins-docker-certs:/certs/client
      - jenkins-data:/var/jenkins_home
    networks:
      jenkins-network:
        aliases:
          - docker
    command: --storage-driver overlay2

networks:
  jenkins-network:
    driver: bridge

volumes:
  jenkins-data:
  jenkins-docker-certs:

Let’s break down this configuration:

Jenkins Service

jenkins:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: jenkins
  restart: unless-stopped
  environment:
    - DOCKER_HOST=tcp://docker:2376
    - DOCKER_CERT_PATH=/certs/client
    - DOCKER_TLS_VERIFY=1
  volumes:
    - jenkins-data:/var/jenkins_home
    - jenkins-docker-certs:/certs/client:ro
  ports:
    - "8080:8080"
    - "50000:50000"
  networks:
    - jenkins-network

This section:

  • Build: Builds the Jenkins image using our Dockerfile.
  • Restart: Ensures Jenkins restarts automatically unless explicitly stopped.
  • Environment: Points the Docker CLI inside Jenkins at the Docker-in-Docker daemon (tcp://docker:2376) over TLS.
  • Volumes: Connects Jenkins to persistent storage volumes.
  • Ports: Maps container ports to host ports.
  • Networks: Connects Jenkins to a custom network.

Docker-in-Docker Service

docker:
  image: docker:dind
  container_name: jenkins-docker
  privileged: true
  restart: unless-stopped
  environment:
    - DOCKER_TLS_CERTDIR=/certs
  volumes:
    - jenkins-docker-certs:/certs/client
    - jenkins-data:/var/jenkins_home
  networks:
    jenkins-network:
      aliases:
        - docker
  command: --storage-driver overlay2

This section:

  • Image: Uses the official Docker-in-Docker image.
  • Privileged: Grants the container privileged access (required for DinD).
  • Volumes: Shares certificate and data volumes with Jenkins.
  • Networks: Joins the same network as Jenkins with an alias for easy discovery.
  • Command: Specifies the storage driver for best performance.

Networks and Volumes

networks:
  jenkins-network:
    driver: bridge

volumes:
  jenkins-data:
  jenkins-docker-certs:

This section:

  • Networks: Creates a bridge network for container communication.
  • Volumes: Defines persistent volumes for Jenkins data and Docker certificates.

Running Jenkins with Docker Compose

After creating the docker-compose.yml file, start Jenkins with:

docker-compose up -d

This command starts all services defined in the Docker Compose file in detached mode (background). Jenkins will be accessible at http://localhost:8080.

To stop Jenkins:

docker-compose down

To stop Jenkins and remove volumes (note that this also deletes the Jenkins home data, including job history):

docker-compose down -v
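
To see whether both services came up, and to watch Jenkins initialize, the usual Compose commands help:

# List the services and their state
docker-compose ps

# Follow the Jenkins service logs
docker-compose logs -f jenkins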

Automating Jenkins Setup

Automating Jenkins setup can save time and ensure consistency, especially if you’re setting up multiple Jenkins instances or managing configurations across environments.

Modern Approach: Jenkins Configuration as Code (JCasC)

The most modern and recommended approach for automating Jenkins configuration is using the Jenkins Configuration as Code (JCasC) plugin, which we already included in our Dockerfile.

What is JCasC?

JCasC allows you to define your entire Jenkins configuration in a YAML file, including:

  • System settings
  • Credentials
  • Security settings
  • Global tool and plugin configuration
  • Jobs (via the Job DSL plugin)

Creating a Complete JCasC File

Let’s expand our jenkins.yaml file to include more configuration:

jenkins:
  systemMessage: "Jenkins configured automatically by Jenkins Configuration as Code plugin"
  numExecutors: 2
  labelString: "docker-jenkins"
  mode: NORMAL

  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "admin"

  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false

  globalNodeProperties:
    - envVars:
        env:
          - key: "EXAMPLE_KEY"
            value: "EXAMPLE_VALUE"

credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: "my-github-creds"
              username: "github-user"
              password: "github-token"
              description: "GitHub Credentials"

jobs:
  - script: >
      pipelineJob('example-pipeline') {
        definition {
          cps {
            script('''
              pipeline {
                agent any
                stages {
                  stage('Hello') {
                    steps {
                      echo 'Hello World'
                    }
                  }
                }
              }
            ''')
            sandbox(true)
          }
        }
      }      

This configuration:

  1. Sets up basic Jenkins system settings
  2. Creates a local admin account
  3. Configures the authorization strategy
  4. Sets global environment variables
  5. Creates credentials (don't commit real secrets in plain text; see the sketch below)
  6. Defines a simple pipeline job (this relies on the job-dsl plugin, which is why it's included in the plugin list in the Dockerfile)
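
Avoid committing real passwords or tokens in jenkins.yaml. JCasC can substitute environment variables, so a common pattern (a sketch; the variable name is arbitrary) is to write password: "${JENKINS_ADMIN_PASSWORD}" in the YAML and supply the value when the container starts:

# Supply the secret at runtime instead of hard-coding it in jenkins.yaml
docker run -d -p 8080:8080 -p 50000:50000 \
  -e JENKINS_ADMIN_PASSWORD='change-me' \
  --name jenkins my-jenkins:latest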

Applying JCasC Configuration

JCasC automatically applies the configuration when Jenkins starts, as we’ve set the CASC_JENKINS_CONFIG environment variable in our Dockerfile.
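
If you edit jenkins.yaml on a running instance, the plugin can also reload it without restarting Jenkins, either from Manage Jenkins > Configuration as Code in the UI or over HTTP. A sketch, assuming an admin API token:

# Ask the Configuration as Code plugin to re-read its configuration
curl -X POST -u admin:<api-token> http://localhost:8080/configuration-as-code/reload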

Legacy Approach: Groovy Init Scripts

For some advanced customizations not covered by JCasC, you can use Groovy init scripts:

# Add Groovy init scripts
COPY init.groovy.d/ /var/jenkins_home/init.groovy.d/

Create a directory init.groovy.d with a script like basic-security.groovy:

#!groovy
import jenkins.model.*
import hudson.security.*

def instance = Jenkins.getInstance()

// Create admin user
def hudsonRealm = new HudsonPrivateSecurityRealm(false)
hudsonRealm.createAccount("admin", "admin")
instance.setSecurityRealm(hudsonRealm)

// Save configuration
instance.save()

Using Jenkins REST API (Alternative Approach)

For environments where JCasC isn’t suitable, you can use the Jenkins REST API:

#!/bin/bash

# Wait for Jenkins to start
until curl --output /dev/null --silent --head --fail http://localhost:8080; do
    echo "Waiting for Jenkins to start..."
    sleep 5
done

# Get the Jenkins initial admin password
JENKINS_ADMIN_PASSWORD=$(docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword)

# Install a plugin
curl -X POST -u admin:$JENKINS_ADMIN_PASSWORD \
  http://localhost:8080/pluginManager/installNecessaryPlugins \
  --data-urlencode "plugin.git.version=latest"

# Create an API token
TOKEN_RESPONSE=$(curl -X POST -u admin:$JENKINS_ADMIN_PASSWORD \
  http://localhost:8080/me/descriptorByName/jenkins.security.ApiTokenProperty/generateNewToken \
  --data "newTokenName=automation-token")

# Extract the token
API_TOKEN=$(echo $TOKEN_RESPONSE | grep -o '"tokenValue":"[^"]*' | cut -d':' -f2 | tr -d '"')

# Create a job
curl -X POST -u admin:$API_TOKEN -H "Content-Type: application/xml" \
  "http://localhost:8080/createItem?name=example-job" \
  --data-binary @job.xml

This script:

  1. Waits for Jenkins to start
  2. Gets the initial admin password
  3. Installs plugins
  4. Creates an API token
  5. Creates a job from an XML definition file

For this approach, you would need a job.xml file with a job definition.
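
You don't have to write job.xml from scratch: an easy way to get a starting point is to export the configuration of an existing job (a sketch, assuming a job such as the example-pipeline created by JCasC earlier already exists):

# Export an existing job's configuration to use as a template
curl -u admin:$API_TOKEN "http://localhost:8080/job/example-pipeline/config.xml" -o job.xml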


Practical Usage Examples

Let’s explore some practical examples of using our Jenkins setup:

Example 1: Building a Java Application

Create a Jenkins pipeline for building a Java application (the tool names below must match Maven and JDK installations configured under Manage Jenkins > Tools):

pipeline {
    agent any

    tools {
        maven 'Maven 3.8.6'
        jdk 'JDK 17'
    }

    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/example/java-app.git'
            }
        }

        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }

        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                }
            }
        }

        stage('Archive') {
            steps {
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}

Example 2: Building and Publishing a Docker Image

Create a pipeline for building and pushing a Docker image (this assumes a username/password credential with the ID dockerhub-credentials already exists in Jenkins):

pipeline {
    agent any

    environment {
        DOCKER_REGISTRY = 'docker.io'
        DOCKER_IMAGE = 'myusername/my-app'
        DOCKER_CREDENTIALS_ID = 'dockerhub-credentials'
    }

    stages {
        stage('Checkout') {
            steps {
                git 'https://github.com/example/docker-app.git'
            }
        }

        stage('Build Image') {
            steps {
                script {
                    docker.build("${DOCKER_IMAGE}:${env.BUILD_NUMBER}")
                }
            }
        }

        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry("https://${DOCKER_REGISTRY}", DOCKER_CREDENTIALS_ID) {
                        docker.image("${DOCKER_IMAGE}:${env.BUILD_NUMBER}").push()
                        docker.image("${DOCKER_IMAGE}:${env.BUILD_NUMBER}").push('latest')
                    }
                }
            }
        }
    }
}

Example 3: Multi-environment Deployment

Pipeline for deploying to multiple environments:

pipeline {
    agent any

    environment {
        APP_NAME = 'my-application'
    }

    stages {
        stage('Build') {
            steps {
                // Build steps here
                echo 'Building application...'
            }
        }

        stage('Test') {
            steps {
                // Test steps here
                echo 'Testing application...'
            }
        }

        stage('Deploy to Dev') {
            steps {
                echo 'Deploying to development environment...'
                // Deployment steps for dev
            }
        }

        stage('Deploy to Staging') {
            input {
                message "Deploy to staging?"
                ok "Yes, deploy it!"
            }
            steps {
                echo 'Deploying to staging environment...'
                // Deployment steps for staging
            }
        }

        stage('Deploy to Production') {
            input {
                message "Deploy to production?"
                ok "Yes, deploy it!"
            }
            steps {
                echo 'Deploying to production environment...'
                // Deployment steps for production
            }
        }
    }

    post {
        success {
            echo 'Pipeline completed successfully!'
        }
        failure {
            echo 'Pipeline failed!'
        }
    }
}

Conclusion

Key Takeaways

  1. Jenkins on Docker provides flexibility and isolation: Running Jenkins in Docker containers offers isolation, portability, and easier maintenance compared to traditional installations.

  2. Dockerfile customization makes Jenkins setup repeatable: By creating a custom Dockerfile, you can ensure that your Jenkins instance is configured consistently with all the necessary plugins and tools.

  3. Docker Compose simplifies multi-container setups: Docker Compose allows you to orchestrate Jenkins with other services like Docker-in-Docker, databases, or other tools, making complex setups easier to manage.

  4. Configuration as Code is the modern approach: Jenkins Configuration as Code (JCasC) is the recommended way to automate Jenkins configuration, replacing older script-based approaches with declarative YAML configuration.

  5. Pipelines enable powerful CI/CD workflows: Jenkins pipelines allow you to define your entire CI/CD workflow as code, making it versionable, testable, and maintainable.

Best Practices

  1. Keep container images lean: Only install the plugins and tools you actually need to minimize image size and startup time.

  2. Store configurations in version control: Store your Dockerfile, docker-compose.yml, and JCasC configurations in version control to track changes and enable easier recovery.

  3. Use persistent volumes: Always use volumes for Jenkins data to ensure your configurations, jobs, and build history survive container restarts or updates.

  4. Regularly update base images: Periodically update your Jenkins images to get the latest security patches and features.

  5. Implement proper backup strategies: Even with persistent volumes, implement regular backups of your Jenkins data. A simple approach is sketched below.
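
A minimal backup sketch for the named volume used in this guide (stop Jenkins first for a consistent snapshot; if Compose created the volume it may be prefixed with the project name, e.g. <project>_jenkins-data):

# Archive the jenkins-data volume into the current directory
docker run --rm \
  -v jenkins-data:/var/jenkins_home \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/jenkins-backup.tar.gz -C /var/jenkins_home .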

Next Steps

Now that you have Jenkins running in Docker, consider exploring:

  1. Jenkins Distributed Builds: Scale your CI/CD by setting up Jenkins agents to distribute build workloads.

  2. Jenkins Pipeline Libraries: Create reusable pipeline components to standardize workflows across projects.

  3. Integration with Kubernetes: Deploy Jenkins on Kubernetes for even greater scalability and resilience.

  4. Advanced Security Configurations: Implement more robust security measures like OAuth integration or role-based access control.

  5. Monitoring and Alerting: Set up monitoring for your Jenkins instance to track performance and detect issues.

By following this guide, you’ve taken a significant step toward modernizing your CI/CD infrastructure with containerized Jenkins. The combination of Jenkins and Docker provides a powerful, flexible foundation for automating your software delivery pipeline.
