Docker Cookbook

Understanding Docker Fundamentals

What is containerization?

Containerization is a method of packaging, distributing, and running applications in isolated environments called containers. Unlike traditional virtualization, where each virtual machine (VM) includes a full operating system, containerization isolates applications at the operating system level, allowing them to share the same OS kernel while maintaining separate file systems, libraries, and dependencies.

Key points about containerization:

  • Isolation: Containers provide process isolation, ensuring that applications running within them do not interfere with each other or the underlying host system.
  • Lightweight: Containers are lightweight and portable, consuming fewer resources compared to VMs, making them faster to start up and easier to deploy.
  • Consistency: Containers encapsulate all dependencies, libraries, and configurations required to run an application, ensuring consistent behavior across different environments.
  • Scalability: Containerized applications can be scaled horizontally by spinning up multiple instances of the same container, making it easier to handle varying workloads and traffic demands.

Why Docker? Understanding the Benefits

Docker is one of the most popular containerization platforms, known for its simplicity, portability, and ecosystem support. Some key benefits of using Docker include:

  • Standardization: Docker provides a standardized format for packaging applications and their dependencies into containers, ensuring consistency across development, testing, and production environments.
  • Portability: Docker containers can run on any platform that supports Docker, making it easy to deploy applications across different cloud providers, on-premises servers, and developer workstations.
  • Efficiency: Docker containers share the same OS kernel and use minimal resources, resulting in faster startup times, reduced memory overhead, and improved resource utilization compared to traditional VMs.
  • Isolation: Docker containers provide lightweight, isolated environments for running applications, enabling developers to work on multiple projects simultaneously without interference.
  • Ecosystem: Docker has a rich ecosystem of tools and services, including Docker Hub for sharing and discovering container images, Docker Compose for defining multi-container applications, and Docker Swarm for orchestrating container clusters.

Overview of Docker's Architecture:

Docker's architecture consists of several components that work together to manage and run containers:

  • Docker Engine: The Docker Engine is the core component responsible for building, running, and managing Docker containers. It includes the Docker daemon, which runs on the host machine, and the Docker client, which provides a command-line interface for interacting with the daemon.
  • Docker Images: Docker images are read-only templates used to create Docker containers. An image includes everything needed to run an application, such as code, libraries, dependencies, and configuration files. Images are created using Dockerfiles, which specify the instructions for building the image.
  • Docker Containers: Docker containers are lightweight, runnable instances of Docker images. Each container runs in its isolated environment, with its filesystem, networking, and process space. Containers can be started, stopped, paused, and deleted using Docker commands.
  • Docker Hub: Docker Hub is a cloud-based registry service provided by Docker, Inc. It serves as a central repository for storing and sharing Docker images. Users can search for public images, publish their own images, and collaborate with others by sharing containerized applications.
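
These components come together in everyday use: the docker client sends commands to the daemon, which pulls images from a registry such as Docker Hub and runs them as containers. A short illustrative sequence (the nginx image and the container name web are just examples):

docker pull nginx
docker run -d --name web nginx
docker ps
docker images
docker stop web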

Installation and Setup of Docker

1. Installing Docker on Linux:

Package Manager Installation (Ubuntu/Debian):

Update the package repository:

sudo apt update

Install Docker using the apt package manager (docker.io is the package maintained by Ubuntu/Debian; Docker also publishes its own docker-ce packages through a separate repository):

sudo apt install docker.io

Package Manager Installation (CentOS/RHEL):

Update the package repository:

sudo yum update

Install Docker using the yum package manager:

sudo yum install docker

Installation Script (Generic Linux):

Run the Docker installation script:

curl -fsSL https://get.docker.com | sh
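
If you prefer to review the script before executing it, download it first and then run it:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh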

Manual Installation (Advanced Users):

Download the Docker Engine binary for Linux.

Extract the binary and follow the manual installation instructions provided.

2. Installing Docker on macOS:

Docker Desktop Installation:

Download Docker Desktop for macOS from the official Docker website.

Run the installer and follow the on-screen instructions to complete the installation.

Package Manager Installation (macOS):

Install Docker Desktop using the Homebrew cask (brew install docker on its own installs only the Docker command-line client, not the daemon):

brew install --cask docker

3. Installing Docker on Windows:

Docker Desktop Installation:

Download Docker Desktop for Windows from the official Docker website.

Run the installer and follow the on-screen instructions to complete the installation.

Chocolatey Package Manager Installation (Windows):

Install Docker Desktop using the Chocolatey package manager:

choco install docker-desktop

Verifying the Installation and Basic Configuration:

1. Verifying Installation (All Platforms):

Check Docker Version:

Open a terminal or command prompt and run:

docker --version

Run Hello-World Container:

Test Docker installation by running a hello-world container:

docker run hello-world
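
A successful run prints a short confirmation message from the hello-world image. You can also verify that the client can reach the Docker daemon with:

docker info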

2. Basic Configuration (All Platforms):

Start Docker Service (Linux Only):

Start the Docker service if it's not already running:

sudo systemctl start docker

Enable Docker Service (Linux Only):

Enable Docker to start on boot:

sudo systemctl enable docker
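
Run Docker Without sudo (Linux Only, Optional):

Add your user to the docker group so that docker commands work without sudo; log out and back in for the change to take effect. Note that membership in this group effectively grants root-level access to the host:

sudo usermod -aG docker $USER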

Docker Preferences (macOS and Windows):

Configure Docker preferences through the Docker Desktop application, adjusting resource allocation and other settings as needed.

Docker Desktop Dashboard (macOS and Windows):

Launch Docker Desktop application to access the Docker Dashboard, where you can manage containers, images, networks, and volumes.

Docker Images

Dockerfile Deep Dive:

Dockerfile Syntax and Instructions

FROM:

The FROM instruction specifies the base image from which the Docker image will be built.

Example: FROM ubuntu:20.04

LABEL:

The LABEL instruction adds metadata to the Docker image, providing valuable information such as the maintainer's contact details or version numbers.

Example: LABEL maintainer="John Doe <john@example.com>"

RUN:

The RUN instruction executes shell commands within the Docker container during the build process.

Example: RUN apt-get update && apt-get install -y python3

COPY / ADD:

The COPY and ADD instructions copy files or directories from the host machine into the Docker image.

Example: COPY app.py /app/

WORKDIR:

The WORKDIR instruction sets the working directory for subsequent instructions in the Dockerfile.

Example: WORKDIR /app

EXPOSE:

The EXPOSE instruction documents the network ports on which the container listens at runtime.

Example: EXPOSE 80

CMD / ENTRYPOINT:

The CMD and ENTRYPOINT instructions define the command to execute when the Docker container starts.

Examples: CMD ["python", "app.py"], ENTRYPOINT ["python"]
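
A common pattern combines the two: ENTRYPOINT fixes the executable, while CMD supplies default arguments that can be overridden at run time. A small illustrative sketch (the image name myapp is hypothetical):

ENTRYPOINT ["python"]
CMD ["app.py"]

With this, docker run myapp executes python app.py, while docker run myapp other.py overrides CMD and executes python other.py.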

ENV:

The ENV instruction sets environment variables within the Docker container.

Example: ENV APP_VERSION=1.0

USER:

The USER instruction sets the user or UID that the container process runs as when the container starts.

Example: USER myuser

ARG:

The ARG instruction defines build-time variables that are accessible only during the build process. These variables can be overridden at build time.

Example: ARG VERSION=latest
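
The default can be overridden when the image is built; the image tag used here is only an example:

docker build --build-arg VERSION=1.2 -t myapp:1.2 .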

VOLUME:

The VOLUME instruction creates a mount point with the specified name and marks it as externally mounted.

Example: VOLUME /var/log

HEALTHCHECK:

The HEALTHCHECK instruction defines a command to periodically check the container's health status.

Example: HEALTHCHECK CMD curl --fail http://localhost/ || exit 1
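
HEALTHCHECK also accepts timing options such as --interval, --timeout, --start-period, and --retries:

HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl --fail http://localhost/ || exit 1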

COPY --from:

The COPY --from instruction copies files or directories from another stage in a multi-stage build.

Example: COPY --from=builder /app/build /var/www/html
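
For context, a minimal multi-stage sketch might look like the following. The stage name builder and the Node.js front-end build are illustrative assumptions (the project is assumed to define an npm build script); /usr/share/nginx/html is the default document root of the official nginx image:

# Build stage
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

# Runtime stage: only the build output is carried over
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html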

RUN ["executable", "param1", "param2"]:

This form of the RUN instruction allows running commands without shell processing.

Example: RUN ["apt-get", "update"]

ONBUILD:

The ONBUILD instruction adds triggers to the image. These triggers will be executed when the image is used as the base for another build.

Example: ONBUILD ADD . /app/src

STOPSIGNAL:

The STOPSIGNAL instruction sets the system call signal that will be sent to the container to stop it gracefully.

Example: STOPSIGNAL SIGTERM

SHELL:

The SHELL instruction allows the default shell used for the RUN, CMD, and ENTRYPOINT instructions to be overridden.

Example: SHELL ["/bin/bash", "-c"]

MAINTAINER:

Though similar to LABEL, MAINTAINER was used to specify the maintainer of the Dockerfile. However, it's now considered deprecated, and LABEL should be used instead.

Example: MAINTAINER John Doe "john@example.com"

Comments:

Dockerfiles support comments prefixed with the # symbol. These comments provide clarity and documentation within the Dockerfile.

Example: # This is a comment explaining the purpose of this instruction

Best Practices for Writing Efficient Dockerfiles:

  • Use Official Base Images: DevOps teams prioritize official base images from trusted sources to ensure reliability, security, and compatibility.
  • Minimize Layers: Optimizing Dockerfile layers reduces image size and improves build performance.
  • Leverage Caching: Efficient caching strategies accelerate build times by maximizing Docker layer caching.
  • Use .dockerignore: Exclude unnecessary files and directories from Docker builds to minimize image bloat and enhance security (a sample .dockerignore follows this list).
  • Optimize COPY/ADD Instructions: Selectively copy only essential files and directories to streamline image creation and reduce overhead.
  • Remove Unused Dependencies: Clean up any temporary files, caches, or package managers' metadata after installing dependencies to reduce the image size.
  • Use Multi-Stage Builds: For complex builds, use multi-stage builds to separate the build environment from the runtime environment. This reduces the size of the final image by eliminating build-time dependencies.
  • Run Containers as Non-Root Users: Whenever possible, run containers as non-root users to minimize security risks. Use USER instruction to switch to a non-root user in the Dockerfile.
  • Document Your Dockerfile: Include comments and labels in your Dockerfile to provide clarity and context to anyone who might be reading or modifying it in the future.
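
A .dockerignore file sits next to the Dockerfile and lists paths to exclude from the build context. A minimal sketch for a Python project (entries are illustrative):

.git
__pycache__/
*.pyc
.env
venv/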

The following example Dockerfile builds an image for a Flask application, installing Nginx and Git and cloning the application code from a Git repository at build time:

# Use a base image with Python and Nginx installed
FROM python:3.9-slim AS base

# Install Nginx
RUN apt-get update && apt-get install -y nginx

# Set up Nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf

# Set the working directory
WORKDIR /app

# Install Git
RUN apt-get update && apt-get install -y git

# Set up Git configuration (optional)
RUN git config --global user.email "you@example.com"
RUN git config --global user.name "Your Name"

# Copy the Flask application code from the Git repository using a personal access token
ARG GIT_TOKEN
RUN git clone https://<username>:${GIT_TOKEN}@github.com/yourusername/your-repository.git .

# Install Flask and other dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Expose the Flask application port
EXPOSE 5000

# Start Flask app
CMD ["python", "app.py"]

In this Dockerfile:

  • We set up the base image with Python and Nginx installed, and copy the Nginx configuration file.
  • We install Git to enable repository cloning within the Docker container.
  • Optionally, we configure Git with a user email and name.
  • We use a build argument GIT_TOKEN to pass the personal access token at build time. Note that build arguments can remain visible in the image history, so treat this as a convenience rather than a fully secure way to handle secrets.
  • When cloning the Git repository, we include the personal access token in the URL, replacing <username> with your GitHub username and your-repository.git with the name of your repository.
  • We list the libraries the application needs in requirements.txt and install them with pip.
  • We expose port 5000 so the Flask application is reachable and use CMD to start the app when the container starts.
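
To build and run this image, commands along the following lines would work; the image tag flask-app and the token value are placeholders:

docker build --build-arg GIT_TOKEN=your_token -t flask-app .
docker run -d -p 5000:5000 flask-app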