
Docker Part 2: Advanced Commands

 

What is a Layer?

A layer is essentially an intermediate file system snapshot generated during the build process. Layers are stacked to form the final Docker image.

  • Each instruction in a Dockerfile adds a new layer (instructions like RUN, COPY, and ADD create filesystem layers; the rest add only metadata).
  • Layers are immutable: Once created, they cannot be modified.
  • Layers are shared: If multiple images share the same base or instructions, Docker reuses the existing layers instead of creating new ones.

Benefits of Layers

  1. Storage Efficiency:

    • Layers are stored independently.
    • If multiple images use the same base image or layers, Docker only stores them once.
    • This reduces disk usage significantly.
  2. Build Optimization:

    • Docker caches layers. When rebuilding an image, Docker skips unchanged layers and reuses cached ones.
    • Only layers affected by changes in the Dockerfile are rebuilt.
  3. Fast Deployment:

    • When pulling or pushing images, Docker transfers only the layers that are missing on the target machine.

How Layers Work in a Dockerfile

Here’s an example Dockerfile to illustrate layers:

Dockerfile

# Layer 1: Base image
FROM ubuntu:20.04

# Layer 2: Update package lists
RUN apt-get update

# Layer 3: Install curl
RUN apt-get install -y curl

# Layer 4: Add application code
COPY app.py /app

# Layer 5: Default command
CMD ["python3", "/app/app.py"]
  • Each RUN, COPY, or CMD adds a new layer.
  • If you modify COPY app.py /app, only Layer 4 and subsequent layers are rebuilt.
  • Layers 1–3 remain cached, significantly speeding up the build process.
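
To watch layer caching at work, build the same image twice: the second build reuses cached layers for every unchanged instruction. The --no-cache flag forces a full rebuild (the tag layer_demo is just an illustrative name):

bash

docker build -t layer_demo .              # first build: every instruction creates a layer
docker build -t layer_demo .              # rebuild: unchanged instructions come from the cache
docker build --no-cache -t layer_demo .   # force a full rebuild, ignoring the cache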

Best Practices for Layer Management

  1. Combine Commands to Minimize Layers:

    • Each RUN command creates a layer. Combine commands to reduce the number of layers.
    • Example:
      Dockerfile

      RUN apt-get update && apt-get install -y curl
    • This creates a single layer for updating and installing.
  2. Order Commands for Maximum Caching:

    • Place commands least likely to change early in the Dockerfile.
    • Example:
      Dockerfile

      FROM ubuntu:20.04
      RUN apt-get update
      RUN apt-get install -y curl
      COPY app.py /app
    • Changes to app.py won’t invalidate the earlier cached layers.
  3. Avoid Adding Unnecessary Files:

    • Use .dockerignore to exclude files you don’t need in the build context.
    • Example .dockerignore:

      .git
      node_modules
      *.log

Checking Layer Details

Use the docker history command to see the layers of an image:

bash

docker history python_demo

Example Output:

IMAGE          CREATED          CREATED BY                                       SIZE
69c0e1234567   1 minute ago     CMD ["python3" "/app/app.py"]                    0B
b32056438b5e   2 minutes ago    COPY app.py /app                                 4kB
d13c942271d6   3 weeks ago      RUN apt-get update && apt-get install -y curl    29MB

Multi-Stage Builds (With Examples)

What Are Multi-Stage Builds?

Multi-stage builds allow you to use multiple FROM instructions in a single Dockerfile. Each stage can produce artifacts, such as compiled code, which are passed to later stages. The final image includes only what’s necessary for the application to run, keeping it lightweight.
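
Here is a minimal sketch of a multi-stage build (the stage name builder and the file names are illustrative): the first stage installs the Python dependencies with full build tooling available, and the final stage copies only the installed packages and the application code into a slim runtime image.

Dockerfile

# Stage 1: install dependencies in a full-featured image
FROM python:3.9 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages into a slim runtime image
FROM python:3.9-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]

Build it as usual (docker build -t multistage_example .); only the final stage ends up in the resulting image, so build-only tooling never bloats what you ship.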


How to Build a Dockerfile

1. FROM: Importing Base Images

What is a Base Image?

A base image is the foundation of every Docker image. It is the starting point for creating your custom Docker image. Base images can be:

  • Minimal Operating Systems (e.g., ubuntu, debian).
  • Language-Specific Images (e.g., python, node, java).
  • Pre-configured Images for specific tasks (e.g., tensorflow, flask).

Why Do We Use Base Images?

  1. Consistency:
    • A base image provides a standard environment for your application. This ensures the same dependencies and configurations are available, regardless of where the container runs.
  2. Ease of Use:
    • Base images save time by pre-configuring environments (e.g., Python installation in python images).
  3. Customizability:
    • You can extend a base image by adding your own software, files, or configuration.
  4. Portability:
    • Applications packaged with base images can run on any machine with Docker installed, regardless of the host OS.

Example: Using FROM

  1. Operating System Base Image:

    Dockerfile

    FROM ubuntu:20.04
    • Starts with the Ubuntu 20.04 operating system.
    • Used when you need a generic Linux environment.
  2. Language-Specific Base Image:

    Dockerfile

    FROM python:3.9-slim
    • Starts with Python 3.9 pre-installed.
    • Slim Variant: A lighter version with fewer pre-installed tools to reduce image size.
  3. Specialized Base Image:

    Dockerfile

    FROM tensorflow/tensorflow:latest
    • Starts with a TensorFlow image, pre-configured for machine learning applications.

How to Choose a Base Image?

  1. Application Requirements:
    • Use python:3.9-slim for Python apps or node:16-alpine for Node.js apps.
  2. Performance:
    • Use minimal images (e.g., alpine) for smaller and faster images.
  3. Community Support:
    • Official images (e.g., python, nginx) are maintained by Docker or language communities, ensuring reliability.
  4. Security:
    • Prefer official images or trusted sources to avoid vulnerabilities.

What Happens When You Use FROM?

  • Docker pulls the specified base image (if not already available locally) from Docker Hub or a private registry.
  • Example:
    bash

    docker pull python:3.9-slim
    Output:

    3.9-slim: Pulling from library/python
    Digest: sha256:...
    Status: Downloaded newer image for python:3.9-slim

Task:

  1. Create a simple Dockerfile with a base image:

    Dockerfile

    FROM python:3.9-slim
    CMD ["python", "--version"]
  2. Build and run the image:

    bash

    docker build -t python_base_example .
    docker run python_base_example
  3. Try changing the base image to alpine and observe the differences in size and functionality:

    Dockerfile

    FROM python:3.9-alpine
    CMD ["python", "--version"]

2. WORKDIR: Setting the Working Directory

What Does WORKDIR Do?

  • WORKDIR sets the working directory inside the container where all subsequent commands (like COPY, RUN, or CMD) will execute.
  • It ensures a consistent directory structure and avoids the need to specify absolute paths in commands.

Why Use WORKDIR?

  1. Simplifies Commands:
    • Without WORKDIR, every file operation (e.g., COPY, RUN) requires specifying the full path.
    • Example without WORKDIR:
      Dockerfile

      COPY app.py /app/app.py
      CMD ["python", "/app/app.py"]
      Example with WORKDIR:
      Dockerfile

      WORKDIR /app
      COPY app.py .
      CMD ["python", "app.py"]
  2. Improves Readability:
    • It makes the Dockerfile cleaner and easier to follow.
  3. Reduces Errors:
    • It ensures all commands operate relative to a specific directory.

Without WORKDIR

If you don’t use WORKDIR, you’d need to adjust paths:

Dockerfile

FROM python:3.9-slim
COPY app.py /myapp/app.py
CMD ["python", "/myapp/app.py"]

This works but is less clean and prone to errors if paths change.

3. COPY: Transferring Files from Host to Container

What Does COPY Do?

  • The COPY instruction copies files or directories from the host machine (your local system) to the container's filesystem during the build process.

Why Use COPY?

  1. Include Application Files:
    • Transfer your application code, configuration files, or dependencies into the container.
  2. Preserve File Structure:
    • COPY maintains the original directory structure of the files being copied.
  3. Improved Security:
    • Compared to alternatives like ADD, COPY is simpler and avoids unintentional behaviors like auto-extracting archives.

Syntax

Dockerfile

COPY [source] [destination]
  • source: Path on the host machine (relative to the build context).
  • destination: Path inside the container.

Example: Copying a Single File

  1. Folder Structure:

    .
    ├── app.py
    └── Dockerfile
  2. Dockerfile:

    Dockerfile

    FROM python:3.9-slim
    WORKDIR /app
    COPY app.py .
    CMD ["python", "app.py"]
  3. Explanation:

    • COPY app.py .:
      • Copies app.py from the build context (host) to the working directory (/app) inside the container.
  4. Command to Build and Run:

    bash

    docker build -t copy_example .
    docker run copy_example
  5. Output:


    Hello from COPY!

Example: Copying a Directory

  1. Folder Structure:


    .
    ├── src/
    │   ├── app.py
    │   └── helper.py
    └── Dockerfile
  2. Dockerfile:

    Dockerfile

    FROM python:3.9-slim
    WORKDIR /app
    COPY src/ .
    CMD ["python", "app.py"]
  3. Explanation:

    • COPY src/ .:
      • Copies all files inside the src directory from the build context to the working directory (/app) inside the container.
  4. Command to Build and Run:

    bash

    docker build -t copy_dir_example .
    docker run copy_dir_example

Using Wildcards in COPY

  1. Folder Structure:

    .
    ├── Dockerfile
    ├── app.py
    ├── requirements.txt
    └── readme.md
  2. Dockerfile:

    Dockerfile

    FROM python:3.9-slim
    WORKDIR /app
    COPY *.py .        # Copies all Python files
    CMD ["python", "app.py"]
  3. Explanation:

    • COPY *.py .:
      • Copies only Python files from the build context to the working directory (/app).

Common Mistakes with COPY

  1. Incorrect Build Context:

    • The source must be relative to the build context (the directory you specify when running docker build .).
    • Example: If your build context is ./my_project, COPY ../app.py . will fail because ../app.py is outside the context.
  2. Forgetting Relative Paths:

    • Always specify the relative path for source when using COPY.

4. RUN: Executing Commands During Build

What Does RUN Do?

  • The RUN instruction executes commands in a new layer of the image during the build process.
  • It’s typically used to install software, update packages, or configure the environment.

Why Use RUN?

  1. Customizing the Image:
    • You can install required tools, libraries, or dependencies.
  2. Preparing the Environment:
    • Configure the environment to match the application’s needs (e.g., setting up OS packages).
  3. Caching:
    • Since each RUN command creates a new layer, Docker caches the results. If nothing changes in the layer, Docker reuses the cache, speeding up builds.

Syntax

Dockerfile

RUN <command>
  • The <command> is executed inside the container during the build process.

Examples of RUN

1. Installing Dependencies

Dockerfile

FROM python:3.9-slim
RUN apt-get update && apt-get install -y curl
  • apt-get update: Updates the package lists.
  • apt-get install -y curl: Installs the curl tool without prompting for confirmation (-y).

2. Installing Python Libraries

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
  • Explanation:
    • pip install: Installs Python packages specified in requirements.txt.
    • --no-cache-dir: Prevents caching to reduce image size.

3. Combining Commands

You can combine multiple commands in a single RUN instruction using &&:

Dockerfile

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    git \
 && apt-get clean
  • Explanation:
    • Combining commands into one RUN minimizes the number of image layers.
    • apt-get clean: Removes temporary files to reduce the image size.

Best Practices for RUN

  1. Minimize Layers:

    • Combine related commands to reduce the number of layers.
    • Example:
      Dockerfile

      RUN apt-get update && apt-get install -y curl vim
      Instead of:
      Dockerfile

      RUN apt-get update
      RUN apt-get install -y curl
      RUN apt-get install -y vim
  2. Clean Temporary Files:

    • Always remove temporary files created during installation.
    • Example:
      Dockerfile

      RUN apt-get update && apt-get install -y curl && apt-get clean
  3. Use Specific Versions:

    • Install specific versions of libraries to ensure consistency.
    • Example:
      Dockerfile

      RUN pip install flask==2.0.3

What Does apt-get clean Do?

The apt-get clean command:

  • Deletes downloaded package files in /var/cache/apt/archives.
  • Frees up space by removing unused files related to package installations.
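
On Debian-based images it is also common to delete the package index in the same RUN instruction, so the downloaded lists never end up in a layer at all. A typical pattern (a general-purpose sketch, not tied to the examples above) looks like this:

Dockerfile

RUN apt-get update \
 && apt-get install -y curl \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*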

5. CMD: Setting the Default Command

What Does CMD Do?

The CMD instruction specifies the default command to be executed when a container is started. It is like saying, “When someone runs this container, here’s what it should do by default.”


Why Use CMD?

  1. Define the Main Process:
    • Specifies the primary task the container should perform (e.g., running a web server or script).
  2. Allow Overriding:
    • The command defined in CMD can be overridden when running the container.

Syntax

There are three forms of CMD:

  1. Shell Form (executes in a shell like /bin/sh):

    Dockerfile

    CMD python app.py
    • Equivalent to:
      bash

      /bin/sh -c "python app.py"
  2. Exec Form (recommended, more precise):

    Dockerfile

    CMD ["python", "app.py"]
    • Executes the command directly without involving a shell.
    • Avoids issues like shell injection.
  3. Default Parameters:

    Dockerfile

    CMD ["python", "app.py", "--debug"]
    • Specifies default arguments for the command.

Examples of CMD

1. Simple Python Application

Dockerfile:
Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
app.py:
python

print("Hello, Docker CMD!")
Steps to Build and Run:
  1. Build the image:
    bash

    docker build -t cmd_example .
  2. Run the container:
    bash

    docker run cmd_example
  3. Output:

    Hello, Docker CMD!

2. Overriding CMD at Runtime

You can override the CMD instruction when running a container:

bash

docker run cmd_example python --version

Output:

Python 3.9.7
  • Here, python --version overrides the default CMD defined in the Dockerfile.

3. Setting Default Arguments

You can pass default arguments in CMD:

Dockerfile

CMD ["python", "app.py", "--debug"]
  • If the container runs without additional arguments, --debug is used.
  • You can override it by passing different arguments:
    bash

    docker run cmd_example python app.py --prod

Example: Using CMD Arguments in Python Code

Dockerfile:

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py", "apple"]

app.py:

python

import sys

# Access the arguments passed via CMD.
# sys.argv[0] is the script name ("app.py"), so CMD arguments start at index 1.
if len(sys.argv) > 1:  # Check if an argument was provided
    arg = sys.argv[1]
    print(f"Received argument: {arg}")

    # Example of using the argument
    if arg == "apple":
        print("You passed an apple!")
    elif arg == "banana":
        print("You passed a banana!")
    else:
        print(f"I don't recognize this: {arg}")
else:
    print("No arguments provided.")

Common Mistakes

  1. Using Multiple CMD Instructions:
    • Only the last CMD in the Dockerfile is effective. Earlier ones are ignored.
    • Example:
      Dockerfile

      CMD ["python", "app.py"] # This is ignored CMD ["python", "other_script.py"] # This takes effect

6. ENTRYPOINT: Defining a Fixed Command

What Does ENTRYPOINT Do?

  • The ENTRYPOINT instruction specifies the command that will always run when the container starts.
  • Unlike CMD, ENTRYPOINT is designed to make the container behave like a dedicated executable.
  • You can still pass arguments to the ENTRYPOINT command at runtime.

Difference Between ENTRYPOINT and CMD

Feature       | ENTRYPOINT                                         | CMD
--------------|----------------------------------------------------|------------------------------------------------
Purpose       | Defines the core command for the container.        | Defines the default command/arguments.
Overridable?  | Arguments can be added, but the command is fixed.  | The entire command can be replaced at runtime.
Use Case      | When the container has a primary task.             | When a default task can be overridden.

ENTRYPOINT Syntax

  1. Exec Form (recommended):

    Dockerfile

    ENTRYPOINT ["executable", "param1", "param2"]
    • Example:
      Dockerfile

      ENTRYPOINT ["python", "app.py"]
  2. Shell Form (less secure and flexible):

    Dockerfile

    ENTRYPOINT command param1 param2
    • Example:
      Dockerfile

      ENTRYPOINT python app.py

Combining ENTRYPOINT with CMD

You can combine ENTRYPOINT with CMD to specify the command and provide default arguments:

Example:

Dockerfile

ENTRYPOINT ["python", "app.py"] CMD ["--debug"]
  • ENTRYPOINT defines the fixed command (python app.py).
  • CMD provides default arguments (--debug), which can be overridden.

How ENTRYPOINT Works

1. Dockerfile Example

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
ENTRYPOINT ["python", "app.py"]
CMD ["--debug"]

2. app.py

python

import sys

if "--debug" in sys.argv:
    print("Debug mode activated.")
elif "--prod" in sys.argv:
    print("Production mode activated.")
else:
    print("No mode specified.")

3. Build and Run

  1. Build the Image:

    bash

    docker build -t entrypoint_example .
  2. Run the Container with Default CMD:

    bash

    docker run entrypoint_example

    Output:


    Debug mode activated.
  3. Override CMD Arguments:

    bash

    docker run entrypoint_example --prod

    Output:

    Production mode activated.
  4. Override Both ENTRYPOINT and CMD: If you need to replace the entire ENTRYPOINT at runtime, use the --entrypoint flag:

    bash

    docker run --entrypoint "python" entrypoint_example --version

    Output:

    Python 3.9.7

When to Use ENTRYPOINT

  • Dedicated Containers:

    • Use ENTRYPOINT for containers that should always execute a specific program (e.g., web servers, scripts).
    • Example:
      Dockerfile

      ENTRYPOINT ["nginx", "-g", "daemon off;"]
  • Flexible Arguments:

    • Combine ENTRYPOINT with CMD to allow passing different arguments at runtime.

7. EXPOSE: Declaring Ports

What Does EXPOSE Do?

  • The EXPOSE instruction informs Docker that the container will listen on a specified network port at runtime.
  • It is a documentation feature that helps other developers or tools understand which ports the container uses.
  • It does not actually map ports to the host machine — that’s done with the -p or -P flag when running the container.

Why Use EXPOSE?

  1. Documentation:

    • Helps clarify which ports the containerized application expects to communicate on.
    • Example: A web server may expose port 80 for HTTP traffic.
  2. Networking with Other Containers:

    • In Docker Compose or container-to-container networking, EXPOSE signals which ports other containers are expected to connect to on the shared Docker network.

Syntax

Dockerfile

EXPOSE <port>[/<protocol>]
  • <port>: The port number the application listens on inside the container.
  • [/<protocol>] (optional): Defaults to TCP. You can specify UDP if needed.

Examples

1. Expose a Single Port

Dockerfile

EXPOSE 5000
  • Declares that the container will listen on port 5000 using the TCP protocol.

2. Expose Multiple Ports

Dockerfile

EXPOSE 5000 8080
  • Declares that the container listens on both ports 5000 and 8080.

3. Specify Protocols

Dockerfile

EXPOSE 5000/tcp
EXPOSE 8080/udp
  • Declares that the container uses TCP for port 5000 and UDP for port 8080.

How EXPOSE Works

Example Dockerfile

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
EXPOSE 5000
CMD ["python", "app.py"]

app.py (Simple Flask App)

python

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Build and Run the Image

  1. Build the Image:

    bash

    docker build -t expose_example .
  2. Run the Container:

    bash

    docker run -p 5000:5000 expose_example
    • -p 5000:5000: Maps port 5000 on the host to port 5000 in the container.
  3. Access the Application:

    • Open a browser and go to http://localhost:5000.
    • Output:

      Hello from Flask!

Key Points About EXPOSE

  1. No Automatic Port Mapping:

    • EXPOSE only declares the port; it does not map it to the host. Use -p or -P with docker run to map ports.
  2. Networking with Other Containers:

    • In multi-container setups (e.g., Docker Compose), containers on the same network can reach each other on these ports without mapping them to the host; EXPOSE documents which ports are intended for that traffic.
  3. Optional for Port Mapping:

    • You don’t need EXPOSE to map ports. The -p flag works even without it:
      bash

      docker run -p 5000:5000 expose_example
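
If you do declare ports with EXPOSE, the -P (uppercase) flag publishes all of them to random free ports on the host, and docker port shows the resulting mapping. A quick sketch (the container name expose_demo is arbitrary):

bash

docker run -d -P --name expose_demo expose_example
docker port expose_demo        # e.g. 5000/tcp -> 0.0.0.0:49153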

8. ENV: Setting Environment Variables

What Does ENV Do?

The ENV instruction allows you to define environment variables that will be available to:

  1. The container’s runtime environment.
  2. Any processes or applications running inside the container.

Why Use ENV?

  1. Parameterization:
    • Pass configuration values like API keys, database URLs, or debug flags to your application.
  2. Flexibility:
    • Customize the container’s behavior without modifying the code or the Dockerfile.
  3. Reusability:
    • Set common variables once and reuse them across the Dockerfile.

Syntax

Dockerfile

ENV <key>=<value>
  • <key>: The name of the environment variable.
  • <value>: The value to assign.

You can also define multiple variables in one line:

Dockerfile

ENV VAR1=value1 VAR2=value2

Examples

1. Setting a Single Environment Variable

Dockerfile

ENV APP_MODE=production
  • Sets an environment variable APP_MODE with the value production.

2. Using ENV in Commands

You can reference the environment variable in subsequent Dockerfile instructions using $:

Dockerfile

ENV APP_HOME=/usr/src/app
WORKDIR $APP_HOME
  • Sets APP_HOME to /usr/src/app.
  • Uses $APP_HOME in WORKDIR.

3. Passing Environment Variables to the Application

Dockerfile:
Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
ENV APP_MODE=production
CMD ["python", "app.py"]
app.py:
python

import os

app_mode = os.getenv("APP_MODE", "development")  # Default to 'development'
print(f"Application running in {app_mode} mode!")
Build and Run:
  1. Build the Image:

    bash

    docker build -t env_example .
  2. Run the Container:

    bash

    docker run env_example

    Output:


    Application running in production mode!
  3. Override Environment Variables at Runtime:

    bash

    docker run -e APP_MODE=debug env_example

    Output:


    Application running in debug mode!

Using Multiple ENV Variables

Dockerfile

ENV DB_HOST=db.example.com \
    DB_PORT=5432 \
    APP_ENV=staging
  • Sets:
    • DB_HOST to db.example.com
    • DB_PORT to 5432
    • APP_ENV to staging

Best Practices

  1. Use ENV for Constants:
    • Use ENV for variables that don’t change often (e.g., paths, modes).
  2. Avoid Secrets in Dockerfiles:
    • Do not hardcode sensitive information like passwords or API keys. Use runtime options like docker run -e or a secrets manager.
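
For values you don't want to bake into the image, supply them at runtime instead, either per variable with -e or in bulk with --env-file. A small sketch (the .env file name and its contents are just an example):

bash

# .env contains lines such as: DB_PASSWORD=supersecret
docker run --env-file .env env_example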

Common Mistakes

  1. Overwriting Built-In Environment Variables:
    • Be cautious when overriding system variables (e.g., PATH).
  2. Not Quoting Variables:
    • Although optional, quoting values prevents issues with special characters:
      Dockerfile

      ENV APP_MODE="production"

9. ADD: Copying Files with Extra Features

What Does ADD Do?

The ADD instruction copies files or directories from the host machine (build context) into the container. It is similar to COPY but with additional features:

  1. It can automatically extract recognized compressed archives (e.g., .tar, .tar.gz) as they are copied in.
  2. It allows copying files from a remote URL.

Syntax

Dockerfile

ADD [source] [destination]
  • source: File or directory path on the host (or a URL).
  • destination: Path inside the container.

Examples

1. Basic File Copy

Dockerfile:

Dockerfile

FROM python:3.9-slim
WORKDIR /app
ADD app.py .
CMD ["python", "app.py"]
  • Copies app.py from the build context to /app in the container.

2. Handling Compressed Files

If ADD detects a compressed file, it automatically extracts it into the specified location.

Example Folder Structure:


my_project/
├── app.tar.gz
└── Dockerfile

Dockerfile:

Dockerfile

FROM python:3.9-slim
WORKDIR /app
ADD app.tar.gz .
CMD ["python", "app.py"]
  • If app.tar.gz contains app.py, it will be extracted automatically.
  • The extracted content is placed in /app.

3. Remote URLs

Dockerfile:

Dockerfile

FROM python:3.9-slim
WORKDIR /app
ADD https://example.com/sample-data.json /app/sample-data.json
CMD ["python", "process_data.py"]
  • Downloads sample-data.json from the given URL and saves it to /app/sample-data.json.

Best Practices

  1. Use COPY Over ADD When Possible:

    • If you don’t need features like decompression or URL handling, prefer COPY for clarity and simplicity.
  2. Avoid Using ADD for Remote URLs:

    • For better maintainability and security, use tools like curl or wget in a RUN command to fetch remote files.
    • Example:
      Dockerfile

      RUN curl -o /app/sample-data.json https://example.com/sample-data.json
  3. Compressed Files:

    • Extract files manually with RUN commands for greater control:
      Dockerfile

      COPY app.tar.gz /tmp
      RUN tar -xzf /tmp/app.tar.gz -C /app && rm /tmp/app.tar.gz

10. VOLUME: Managing Persistent Data

What Does VOLUME Do?

The VOLUME instruction in a Dockerfile is used to create a mount point and designate a directory inside the container as a volume. Volumes allow you to persist data generated or used by a container, even after the container is removed.


Why Use VOLUME?

  1. Data Persistence:
    • Ensures that data in the specified directory is not lost when the container stops or is removed.
  2. Data Sharing:
    • Enables sharing data between containers or between the host and the container.
  3. Isolation:
    • Keeps application data separate from the container’s image layers.

Syntax

Dockerfile

VOLUME ["path_in_container"]
  • path_in_container: The directory inside the container that should be mounted as a volume.

Examples

1. Basic Example

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
VOLUME ["/data"]
CMD ["python", "app.py"]
  • What Happens?:
    • The directory /data inside the container is designated as a volume.
    • Any data written to /data persists even if the container is removed.

2. Using VOLUME with a Flask App

Dockerfile:

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
VOLUME ["/app/logs"]
CMD ["python", "app.py"]

app.py:

python

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    with open("/app/logs/access.log", "a") as f:
        f.write("Accessed home route\n")
    return "Hello, Flask with Volumes!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

How to Use Volumes

  1. Build and Run the Image:

    bash

    docker build -t volume_example .
    docker run -p 5000:5000 volume_example
  2. Inspect the Volume:

    • Docker automatically manages the /app/logs volume.
    • Find the volume’s location on the host by inspecting the container:
      bash

      docker inspect <container_id>
    • Look under the "Mounts" section.

Mounting a Host Directory

You can override the volume and bind it to a specific host directory using -v:

bash

docker run -p 5000:5000 -v $(pwd)/logs:/app/logs volume_example
  • Maps the logs directory on your host to /app/logs in the container.
  • All logs generated in the container will appear in your host’s logs directory.
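
You can also use a named volume instead of a host directory; Docker then manages where the data lives. A quick sketch (the volume name app_logs is arbitrary):

bash

docker volume create app_logs
docker run -p 5000:5000 -v app_logs:/app/logs volume_example
docker volume inspect app_logs      # shows where Docker stores the data on the host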

11. HEALTHCHECK: Monitoring Container Health

What Does HEALTHCHECK Do?

The HEALTHCHECK instruction defines a command to test whether the container is functioning properly. Docker uses this information to determine the health status of the container, which can be:

  • healthy: The container is working as expected.
  • unhealthy: The health check failed.
  • starting: The container is still starting up.

Syntax

Dockerfile

HEALTHCHECK [OPTIONS] CMD command
  • OPTIONS:

    • --interval=<duration>: Time between health checks (default: 30s).
    • --timeout=<duration>: Maximum time a health check command is allowed to run (default: 30s).
    • --retries=<number>: Number of retries before marking the container as unhealthy (default: 3).
    • --start-period=<duration>: Grace period after container start before health checks begin (default: 0s).
    • --disable: Disables health checks.
  • CMD command:

    • Specifies the health check command to run inside the container.

Examples

1. Basic Health Check for a Web Server

Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
# curl is needed for the health check below and is not included in the slim base image
RUN apt-get update && apt-get install -y curl && apt-get clean
EXPOSE 5000
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:5000 || exit 1
CMD ["python", "app.py"]

Explanation:

  1. HEALTHCHECK:

    • Every 30 seconds (--interval=30s), Docker runs curl -f http://localhost:5000.
    • If the server is unreachable or returns an error, the health check fails (exit 1).
    • After 3 failed attempts (--retries=3), the container is marked as unhealthy.
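
Once a container built from this Dockerfile is running, you can check the health status Docker records. A short sketch (assuming the image is tagged health_example; the container name is arbitrary):

bash

docker build -t health_example .
docker run -d -p 5000:5000 --name health_demo health_example
docker ps                                                  # STATUS column shows (healthy) or (unhealthy)
docker inspect --format '{{.State.Health.Status}}' health_demo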

12. ARG vs ENV: Managing Configuration

What is ARG?

  • The ARG instruction allows you to define variables that are available only at build time.
  • These variables are used to parameterize your Dockerfile and cannot be accessed after the image is built.

What is ENV?

  • The ENV instruction sets environment variables that are available at runtime.
  • These variables can be accessed by applications running inside the container.
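
A small sketch showing the difference (APP_VERSION is a made-up variable): the ARG value exists only while the image is being built, while the ENV value is still visible inside the running container.

Dockerfile

FROM python:3.9-slim
# Build-time variable; override with: docker build --build-arg APP_VERSION=2.0 .
ARG APP_VERSION=1.0
# Promote the build-time value to a runtime environment variable
ENV APP_VERSION=$APP_VERSION
CMD ["python", "-c", "import os; print('Running version', os.getenv('APP_VERSION'))"]

At build time you can pass --build-arg APP_VERSION=2.0; at runtime you can still override the environment variable with docker run -e APP_VERSION=3.0.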

13. ONBUILD: Trigger Commands in Child Images

What Does ONBUILD Do?

The ONBUILD instruction adds a trigger to a parent image. This trigger activates a specified instruction (e.g., COPY, RUN, etc.) whenever the image is used as a base for building a child image.


Why Use ONBUILD?

  1. Parent-Child Workflows:
    • Helps define "default actions" in a parent image that will be executed in the child Dockerfile.
  2. Reusability:
    • Simplifies Dockerfiles for child images by predefining common behaviors.

Syntax

Dockerfile

ONBUILD <instruction>
  • <instruction>: Any valid Dockerfile instruction (e.g., RUN, COPY, etc.).

How It Works

Parent Image (with ONBUILD):

Dockerfile

FROM python:3.9-slim
ONBUILD COPY . /app
ONBUILD RUN pip install -r /app/requirements.txt
  • When this image is used as a base in a child Dockerfile, the ONBUILD instructions (COPY and RUN) are triggered.

Child Image:

Dockerfile

FROM parent_image_with_onbuild
WORKDIR /app
CMD ["python", "app.py"]
  • When the child image is built:
    1. The COPY . /app instruction from the parent is executed.
    2. The RUN pip install -r /app/requirements.txt instruction from the parent is also executed.
    3. Additional instructions in the child Dockerfile are applied.
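
The build workflow might look like this (the tags and Dockerfile names are illustrative); note that the ONBUILD steps only execute during the child build:

bash

# Build the parent image: its ONBUILD instructions are recorded, not executed
docker build -t parent_image_with_onbuild -f Dockerfile.parent .

# Build the child image: the recorded COPY and RUN triggers execute now
docker build -t child_image -f Dockerfile.child .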

14. LABEL: Adding Metadata to Images

What Does LABEL Do?

The LABEL instruction allows you to attach metadata to a Docker image in the form of key-value pairs. This metadata can include information about the image, such as the author, version, description, and more.


Why Use LABEL?

  1. Image Documentation:
    • Helps describe the purpose, author, or version of the image.
  2. Automated Management:
    • Labels can be used by orchestration tools (e.g., Kubernetes, Docker Compose) for searching, filtering, or organizing images.
  3. Compliance:
    • Labels can store compliance data, such as licensing or build information.

Syntax

Dockerfile

LABEL <key>=<value> [<key>=<value>...]
  • key: The label name (e.g., author, version).
  • value: The metadata value.

Examples

1. Basic Labels

Dockerfile

FROM python:3.9-slim
LABEL maintainer="Your Name <you@example.com>"
LABEL version="1.0"
LABEL description="This is a Flask app image"

2. Labels with Spaces

You can include spaces in label values by quoting them:

Dockerfile

LABEL description="This is a lightweight Flask application."

3. Multiple Labels in One Line

Dockerfile

LABEL maintainer="Your Name <you@example.com>" version="1.0" description="A lightweight Flask app."

Inspecting Labels

After building the image, you can inspect its labels using:

bash

docker inspect <image_id>

Example Output:

json

"Labels": { "maintainer": "Your Name <you@example.com>", "version": "1.0", "description": "This is a Flask app image" }
