Docker

From DWAMconsult Wiki

Installation

  • Install Docker and Docker Compose, using either the versions available in the existing repos (apt install docker docker-compose) or the official installation instructions
  • View version installed: docker --version
  • View client and server stats: docker version
    • Ensure Docker daemon running e.g. systemctl enable docker --now
  • View system-wide information, including container and image counts: docker info
  • Add user to Docker group to avoid running as super user /usr/sbin/usermod -a -G docker <user>

Advantages over larger, 'monolithic' applications

  • If the app becomes overloaded, only the services that need it have to be scaled
  • Fault tolerant: a single error in one container should not bring down the entire app
  • Easier to maintain, update & change than large monolithic apps because everything is broken down into smaller components
  • Portable

Overview

  • Containers share host (aka node) resources (kernel, CPU, RAM etc) - less costly than separate VM with own full-blown OS
    • Since all Linux distros share the same kernel, any Linux distro can be containerized on a Linux host, but Windows cannot
    • To run Linux containers on Windows, an underlying Linux VM or Windows Subsystem for Linux is required
  • Evolution of kernel namespaces, control groups, union filesystems, LXC, 'jails' in BSD etc (e.g. Chroot (jails))
  • libcontainer (cross-platform) replaced LXC as default execution driver
  • Open Containers Initiative defines standards for images & container runtime spec
  • Runtime (runc low level, containerd high level), daemon (engine), Orchestrator
  • Low-level runtime: runc. Starts & stops containers, interfaces with host OS. Reference implementation of OCI runtime spec
  • High-level runtime: containerd. Pulls images, creates network interfaces, manages runc instances
  • Docker daemon: dockerd. Exposes Docker remote APIs, manages images & volumes, networks etc
    • Local API endpoint: /var/run/docker.sock
    • Can also be configured to communicate over network (2375/tcp), not secure by default
  • Orchestration: clusters of nodes running Docker called swarms. Kubernetes often preferred over Docker Swarm
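  • As a quick sketch of the local API endpoint mentioned above, the daemon can be queried directly over the Unix socket with curl (requires a running daemon and socket access; these are Docker Engine API endpoints):

```shell
# Query the daemon's version endpoint over the local Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers via the same API the docker CLI uses
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```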

Images

  • Image: object containing OS, file system, application & application dependencies (similar to VM template)
    • Is a combination of layers (base layer, app layer, dependency layer etc)
    • Each image gets a unique ID, and can also be referenced by name (repository:tag)
    • Storage driver stacks layers & presents them as single filesystem/image e.g. AUFS, overlay2, devicemapper, btrfs, zfs
    • Images can share layers - efficiency
    • Local image repository: /var/lib/docker/<storage-driver>
    • Once all containers using an image have been stopped & destroyed, you are able to remove the image
  • Official Docker images: https://hub.docker.com/
Command | Description
docker image ls | Show images
docker image ls -a | Show all images
docker images | Show all images (older command form)
docker image ls -aq | Show all image IDs only
docker search wordpress | Search for WordPress images
docker search wordpress --filter is-official=true | Search only for official versions
docker search php -f is-official=true | Search only for official versions (-f is short for --filter)
docker image pull ubuntu:latest | Pull down an image from Docker Hub (an image 'registry')
docker image pull gcr.io/google-containers/git-sync:v3.1.5 | Pull image from a specified registry
docker image pull mcr.microsoft.com/powershell:latest | Pull image from a specified registry
docker image inspect <image> | Inspect all details of image
docker image rm <image:version or id> | Remove image (containers using it must be stopped first)
docker manifest inspect <image> | Inspect the manifest of this image
docker image history <image> | View image history
docker image ls --filter dangling=true | Find dangling images, i.e. images without a tag - appear as <none>:<none>
docker image prune | Remove dangling images
docker image prune -a | Remove dangling & unused images

Containers

  • Runtime instance of an image. If an image is a 'template', container is the 'virtual machine'
  • Searches local image repository to see if it already has a copy of the image. If not, it downloads it from Docker Hub
  • If you create files on a container they persist if you stop the container but not if you delete it (unless using volumes)
Command | Description
docker [container] ps | Show running containers
docker [container] ps -a | Show all containers, including those in a stopped state
docker [container] run -it ubuntu /bin/bash | Run a container using the latest version of Ubuntu. -it makes it interactive and places you at the bash prompt. Typing exit stops the container; to switch back to the host machine without stopping the container, use Ctrl+P, Ctrl+Q
docker run -d -p 8080:80 docker/getting-started | -d runs detached, -p specifies the host-to-container port mapping. View in browser at http://localhost:8080
docker [container] start <id> | Start the container
docker [container] stop <id> | Stop the container
docker [container] exec -t <id> <command> | Execute a command on a container without attaching to it; -t allocates a terminal (optional)
docker [container] inspect <id> | Examine details of a container, including the default app it is set to run (Cmd section)
docker [container] rm <id> | Remove container. Use -f to force
docker [container] rm $(docker [container] ps -aq) -f | Remove all containers on your host
docker logs <container id> | Show logging, including console.log output, Apache logs, etc
docker kill <container id> | Forcibly stop a container by sending SIGKILL (the container still has to be removed separately)
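
  • A minimal sketch of the file-persistence point above (container and volume names are illustrative):

```shell
# Files created inside a container survive a stop/start...
docker container run -it --name testbox ubuntu /bin/bash   # touch /tmp/file, then exit
docker container start testbox                             # /tmp/file is still there
docker container rm testbox                                # ...but not a remove

# With a named volume, data outlives the container
docker container run -it --name testbox2 -v mydata:/data ubuntu /bin/bash
docker container rm -f testbox2    # the mydata volume (and its contents) remains
```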

Restart policy

  • Define in the docker-compose file, or with the --restart flag on docker container run
Policy | Explanation
always | A stopped container will always be restarted unless explicitly stopped (e.g. docker container stop). It will also be restarted the next time the Docker daemon starts (restart of the daemon, or reboot of the host when the daemon is set to start automatically). Example: docker container run --name mycontainer -it --restart always ubuntu /bin/bash
unless-stopped | Like always, but the container will not be restarted when the daemon restarts if it was in a stopped state before the daemon restarted
on-failure | Restarts the container if it exits with a non-zero exit code, and will always restart containers when the Docker daemon restarts
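  • In a Compose file the policy is set per service with the restart key; a minimal sketch (service and image names are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    restart: unless-stopped
```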

Containerizing an app

  • Create application code & dependencies
  • Create Dockerfile describing app, dependencies and how to run it
  • Feed Dockerfile into docker image build command
  • Optionally, push new image to a registry
  • Run container from the image
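
The steps above can be sketched as a command sequence (image name, tag and port are illustrative):

```shell
# 1-2. Application code plus a Dockerfile describing how to build and run it
ls
# app.py  requirements.txt  Dockerfile

# 3. Build an image from the Dockerfile in the current directory
docker image build -t myapp:1.0 .

# 4. Optionally tag and push the image to a registry
docker image tag myapp:1.0 <username>/myapp:1.0
docker image push <username>/myapp:1.0

# 5. Run a container from the image
docker container run -d --name myapp -p 8080:8080 myapp:1.0
```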

Finding an image and customizing it

  • First step, download an image e.g. docker image pull php:8.1-fpm
  • Run the image in a container: docker container run -it php:8.1-fpm bash and look at what is currently configured. Install extensions and see where the config files are. Write down the commands you use to configure this image; you will use them in the Dockerfile
    • View currently installed extensions: php -m or php -i | less
    • View php.ini and configurations under conf.d in /usr/local/etc/php
    • There are two php.ini template files there, one for development and one for production. Copy the one you want to use, e.g. cp php.ini-production php.ini
    • Install extensions: docker-php-ext-install mysqli then verify it is enabled in the conf.d directory
    • Create custom configurations with new .ini files in the conf.d directory
    • Install using PECL: pecl install apcu then docker-php-ext-enable apcu
    • Install dependencies: apt install libonig-dev libzip-dev (allows for installing mbstring)
  • Easier way: use Dockerfile & add install-php-extensions script which automatically installs dependencies
FROM php:8.1-fpm

ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/

RUN chmod +x /usr/local/bin/install-php-extensions && \
    install-php-extensions gd xdebug opcache mysqli mbstring && \
    pecl install apcu && docker-php-ext-enable apcu

Dockerfile

FROM php:8.1-fpm

ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/

RUN chmod +x /usr/local/bin/install-php-extensions && \
    install-php-extensions gd xdebug opcache mysqli mbstring && \
    pecl install apcu && docker-php-ext-enable apcu

WORKDIR /var/www/html

EXPOSE 9000
  • Note that each step adds a 'layer'. FROM includes the Linux base layer; RUN installs dependencies in a new layer atop the base layer; COPY and RUN are additional layers, etc. WORKDIR sets the working directory inside the image filesystem; it does not add an image layer. ENTRYPOINT sets the main app that a container built from this image should run - this is also metadata (not an image layer)
  • Commands are chained together to avoid multiple RUN instructions (each separate RUN instruction creates another layer)
  • Build: docker image build -t web:latest . (-t applies a name:tag, matching the launch command below)
  • Verify image exists in local repo: docker image ls
  • Launch:
docker container run -d \
--name web1 \
--publish 8080:8080 \
web:latest
  • Then launch browser & enter localhost:8080
  • After changing code or modifying Docker file, build the image again
  • Need to add this config for PHP-FPM
    • If PHP-FPM container is separate from Apache container:
  <FilesMatch \.php$>
    SetHandler "proxy:fcgi://php-fpm-container:9000"
  </FilesMatch>
    • If PHP-FPM and Apache are in the same container, or Apache is on the host & PHP-FPM is in a container:
  <FilesMatch \.php$>
    SetHandler "proxy:fcgi://localhost:9000"
  </FilesMatch>
  • For multiple versions of PHP-FPM, just change the port number

Push (upload, share) image

  • Log in to your Docker account (create one if you don't have one): docker login
  • Tag image for Docker Hub e.g. docker image tag web:latest <Docker Hub username>/web:latest
  • Now when you run docker image ls you will see image has two tags, one of them being <Docker Hub username>/web for your image on Docker Hub, the other being local
  • Push it (upload to Docker Hub): docker image push <Docker Hub username>/web:latest
  • Can now access this image over the internet from anywhere
  • Research how to push to other registries e.g. AWS Elastic Container Registry (ECR)

Multi-stage builds

FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build

FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests

FROM java:8-jdk-alpine
RUN adduser -Dh /home/gordon gordon
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]
  • Defines three build stages - 'storefront', 'appserver', and a final unnamed production stage
  • COPY --from only copies production-related code from images built by previous two stages
  • The eventual image produced is much smaller because the larger intermediate build-stage images are discarded. Build with: docker image build -t multi:stage .
  • Try to write Dockerfiles in a way that places instructions that are likely to invalidate the cache towards the end of the Dockerfile. This means that a cache-miss will not occur until later stages of the build - allowing the build to benefit as much as possible from the cache
    • Use --no-cache=true in the docker image build command to force ignore local cache
    • Use --squash if you want to force image to not share layers with other images (all the layers get 'squashed' into one)
  • Recommended to use --no-install-recommends when using apt-get install to add dependencies to your builds, to reduce unwanted packages in your images
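
A dependency-install layer following these recommendations might look like this (package names are illustrative):

```dockerfile
RUN apt-get update && \
    apt-get install -y --no-install-recommends libzip-dev libonig-dev && \
    rm -rf /var/lib/apt/lists/*
```

Chaining the update, install and cleanup into one RUN keeps it all in a single layer, so the apt cache removal actually shrinks the image.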

Docker Compose

  • Microservices: a web front end, ordering system, catalogue, backend database, logging, authentication & authorization - smaller services working together to form a useful application. Docker Compose is a single-engine approach to managing all these services & making them work together
  • Docker Compose (v1) is an external Python binary that has to be installed separately from Docker itself; newer releases ship as a docker CLI plugin (docker compose)
  • Define your app in a YAML file & pass it to the docker-compose command
  • Install from the repos (apt install docker-compose etc), or download the binary to /usr/local/bin/docker-compose and make it executable: sudo chmod +x /usr/local/bin/docker-compose
  • Check installed version: docker-compose --version

Compose files

  • Uses YAML files (but can also use JSON)
  • Default name of file is docker-compose.yml (use -f to specify other file names)
  • Example:
version: "3.8"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

networks:
  counter-net:

volumes:
  counter-vol:
  • Explanation:
    • Services defines app microservices. In this case two - web frontend and in-memory database, each deployed in separate container
    • Networks: Compose by default creates bridge networks, connecting containers on same Docker host. Can use driver property to specify different network types
    • Volumes: used for creating persistent storage
    • Build: tells Docker to build new image using Dockerfile instructions in current directory .
    • Command: tells Docker to run python app.py as main app in the container
    • Ports: Tells Docker to map port 5000 inside the container to port 5000 on the host
    • Networks: network should already exist or be defined in networks top-level key
    • Volumes: Tells Docker to mount counter-vol to /code inside container - counter-vol needs to already exist or be defined in volumes top-level key at bottom of file
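
As an illustration of the driver property mentioned above, a different network type is selected under the top-level networks key (a sketch; counter-net comes from the example, the driver choice is an assumption):

```yaml
networks:
  counter-net:
    driver: bridge   # default on a single host; e.g. overlay for multi-host swarm networks
```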

Example

  • git clone https://github.com/nigelpoulton/counter-app.git
  • cd counter-app
  • docker-compose up & (the & backgrounds the shell job) - or use -d to bring it up detached
  • http://localhost:5000
  • docker image ls shows it created three images (the web frontend, Python & redis)
  • docker container ls will show that it started two containers (web frontend & redis)
  • Stop the containers: docker-compose down
    • Note that counter-vol doesn't get deleted: volumes intended to store persistent data
  • Bring back up: docker-compose up -d - will be much quicker
  • List processes inside each container with docker-compose top
  • docker-compose stop stops the app without deleting its resources. Remove a stopped Compose app with docker-compose rm
  • Restart stopped app with docker-compose restart
  • Edit the app.py file in the host directory, then find where the volume is exposed (docker volume inspect counter-app_counter-vol | grep Mount) and copy the updated file to that mount point; the update is reflected immediately in the container (after refreshing the browser)
  • Treat Compose files as source code - put them in repo, version control etc

Swarm

  • Clustering Docker hosts & orchestrating microservices apps
  • Kubernetes more popular
  • Clusters are made up of nodes

Links