Docker tutorial: Get started with Docker

Wednesday, October 2, 2024, 11:00 AM, from InfoWorld
Containers are a lightweight way to make application workloads portable, like a virtual machine but without the overhead and bulk typically associated with VMs. With containers, you can package apps and services and move them freely between physical, virtual, or cloud environments.

Docker, a container creation and management system created by Docker Inc., takes the native container functionality found in Linux and makes it available to end-users through a command-line interface and a set of APIs.

Many common application components are now available as prepackaged Docker containers, making it easy to deploy stacks of software as decoupled components—an implementation of the microservices model. It helps to know how the pieces fit together from the inside out, though. In this guide, we’ll investigate how Docker works. We’ll start by looking at how to set up Docker across the Linux, Windows, and macOS platforms. Next, we’ll install an instance of the Apache web server in a Docker container. You’ll also learn how to work with Dockerfiles to automate Docker image builds.

Choose a Docker product

At its core, Docker uses an open source project, Docker Engine. It’s possible to install Docker Engine by itself and work with it directly from the command line, although only on Linux (or through WSL in Windows).

Your second option is Docker Desktop, a convenient GUI-based application for working with containers across multiple platforms. For developers working on Microsoft Windows, Docker Desktop is the most convenient solution.

The main consideration with Docker Desktop is its licensing. It’s free for individual use, non-commercial open source projects, and education, but business use generally requires a paid subscription, with costs that scale with the size of the organization.

Docker Desktop provides a convenient GUI for working with Docker, and includes an embedded console interface as well.

You can also obtain binary editions of the standalone Docker Engine for Windows, macOS, and Linux. However, you’ll have to perform the entire setup process manually, as Docker has more to it than just a binary artifact. The standalone binaries also don’t have any self-updating mechanism, and they may lack many of the features found in the full Docker product.

Using Docker with Linux

Docker started with Linux, as container technology relied on features in the Linux kernel. On Linux, you can use Docker’s core open source features directly in the form of Docker Engine. Setting up Docker Engine requires a different procedure for each major Linux distribution, but the same goes for setting up Docker Desktop on Linux. Once installed, Docker Desktop on Linux provides more convenient ways to manage a Docker setup than the command line alone.
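
As one example of the Linux setup route, Docker publishes a convenience script at get.docker.com that detects your distribution and installs Docker Engine accordingly. A minimal sketch of that route (review any script like this before running it with root privileges):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh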

Using Docker with Windows

On Windows, Docker Desktop can work in one of two modes: with Windows’s native Hyper-V virtualization system, or through a Linux instance in WSL2. Both back ends offer the same functionality, and both have the same hardware requirements: 64-bit CPU with SLAT support, at least 4GB RAM, and BIOS-enabled hardware virtualization support.

Of the two, WSL2 is the more lightweight and broadly available option. Hyper-V is more demanding and ships only with the Professional or Enterprise editions of Windows 10 and 11. Hyper-V provides more process isolation features as of Windows 11 and Windows Server 2022, but those may not be crucial if you’re just starting out.

If you want to run Docker containers under another VM or hypervisor system, such as VMware, Docker supports that only in its business and enterprise tiers.

Using Docker with macOS

Installing Docker Desktop on macOS works much the same as any other desktop application. Double-click the Docker.dmg file to open it, then drag the Docker icon inside to your Applications folder. It’s also possible to run the setup process from the command line.
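
For reference, the command-line route looks roughly like this, assuming Docker.dmg is sitting in your current directory (hdiutil mounts the disk image, and the install binary inside the app bundle performs the setup):

sudo hdiutil attach Docker.dmg
sudo /Volumes/Docker/Docker.app/Contents/MacOS/install
sudo hdiutil detach /Volumes/Docker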

Working with the Docker CLI

The docker command-line utility is where you’re likely to do most of your work with Docker. You can run docker from any console once it’s been properly installed, and view all available Docker commands by simply typing docker. For an up-to-date rundown of all commands, their options, and full descriptions, consult the official command-line client documentation.

When you have Docker set up, one of the first commands to run with it is docker info, which returns basic information about your Docker installation. The output shows the number of containers and images, along with other pertinent information. Note that it may be quite lengthy; this example shows only the last of several pages.

Partial output from the ‘docker info’ command, which includes many details about the current Docker installation and typically spans several console pages.
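
To try it yourself, run docker info; the related docker version command reports just the client and server version details:

docker info
docker version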

Docker commands on Linux
Note that on Linux you may need to preface docker commands with sudo. This advice applies to all the other command examples in this article.
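
If you’d rather not prefix every command, a common alternative is to add your user to the docker group and then log out and back in; be aware that membership in this group effectively grants root-level access to the Docker daemon:

sudo usermod -aG docker $USER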

The Docker Desktop client isn’t meant to replace the docker CLI, but to augment it. It gives you a convenient GUI to do most common day-to-day work with containers: running containers, examining installed images, inspecting created volumes, listing container image builds, and controlling Docker Desktop extensions. Docker Desktop also provides its own built-in console host to give you access to the console without having to switch away.

We’ll use the Docker CLI as the default way to interact with Docker.

Working with Docker containers and images

Docker containers are much more efficient than virtual machines. When a container is not running a process, it is completely dormant. You might think of Docker containers as self-contained processes—when they’re not actively running, they consume no resources apart from storage.

Containers require an image to run, and by default, no images are present in a Docker installation. If you want to run an image that isn’t present, it’ll have to be downloaded and added to the local image repository. You can download and add images to the image repository semi-automatically, as you’ll see in the next example.

Launching a container

Let’s say we want to launch a basic Ubuntu Linux Docker image and run the bash shell. We can use the following command:

docker run -i -t ubuntu /bin/bash

The output will look something like this:

PS C:\Users\serda> docker run -i -t ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
dafa2b0c44d2: Pull complete
Digest: sha256:dfc10878be8d8fc9c61cbff33166cb1d1fe44391539243703c72766894fa834a
Status: Downloaded newer image for ubuntu:latest
root@16fd4752b26a:/#

This shows Docker fetching the ubuntu image and starting a container based on it. The last line is the prompt for the bash shell running in the container, where you can type commands. Note that any commands typed at that prompt will be run inside the container, not on the host system.

Examining running containers

You can view active and inactive containers using the docker ps command. (Remember to run this from your actual system console, not from the prompt above, which is running inside the container.) If you use docker ps -a, it will show all containers on the system regardless of their status; docker ps alone shows only the running containers.

The output for docker ps may look something like this:

CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS          PORTS     NAMES
16fd4752b26a   ubuntu    "/bin/bash"   25 minutes ago   Up 25 minutes             stoic_gould

Each running container has an ID associated with it (here, the string beginning with 16fd), plus information about which image was used to create it and a friendly name for the container (here, stoic_gould). The friendly name can be manually assigned with docker run's --name switch; otherwise, Docker assigns one randomly at startup.
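
For instance, to pick the name yourself (my_ubuntu is just an illustrative name):

docker run -i -t --name my_ubuntu ubuntu /bin/bash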

Pulling containers

When we ran docker run, it automatically pulled an Ubuntu container image from the Docker Hub registry service. Most of the time, though, you’ll want to pull container images into the local cache ahead of time, rather than do that on demand. To do so, use docker pull, like this:

docker pull ubuntu

A full, searchable list of images and repositories is available on the Docker Hub.

Docker images vs. containers

Something worth spelling out at this point is how images, containers, and the pull/push process all work together.

Docker containers are built from images, which are essentially stripped-down snapshots of operating systems, containing the binaries and libraries needed to run applications in a container.

Images are labeled with tags, essentially metadata, that make it easy to store and pull different versions of an image. Naturally, a single image can be associated with multiple tags: ubuntu:16.04, ubuntu:xenial-20171201, ubuntu:xenial, ubuntu:latest.

The command docker pull ubuntu, which we saw earlier, pulls the default Ubuntu image from the Ubuntu repository, which is the image tagged latest. In other words, the command docker pull ubuntu is equivalent to docker pull ubuntu:latest.

Note that if I had typed docker pull -a ubuntu, I would have pulled all images (the -a flag) in the Ubuntu repository into my local system. This would be convenient if I wanted to work with a variety of Ubuntu images without having to fetch each individually, but it would take up a lot of space locally.

Most of the time, though, you will want either the default image or a specific version. For example, if you want the image for Ubuntu Saucy Salamander, you’d use docker pull ubuntu:saucy to fetch the image with that particular tag from that repo.

The same logic behind repos and tags applies to other image manipulations. If you pulled saucy, as in the above example, you would run it by typing docker run -i -t ubuntu:saucy /bin/bash. If you typed docker image rm ubuntu to remove the ubuntu image, it would remove only the image tagged latest. To remove images other than the default, such as Ubuntu Saucy, you must include the appropriate tag: docker image rm ubuntu:saucy.

Docker image and container workflow

Once you’ve pulled an image, you start a live container using the image’s contents by executing the docker run command.

Images are immutable. They are not changed when you run a container; the container starts off as essentially a copy of what’s in the image, and any changes that take place live in the container’s writable layer and are lost when the container is removed.

If you want to make changes to the image, you can do this in a couple of ways. You can modify the image’s Dockerfile and build a new image using those changes. Or, you can make changes inside the running container, and create a new image incorporating those changes with the docker commit command. In either case, you’re not modifying the original image, but creating a new one with the changes.

It’s important to note that Docker only stores the deltas, or changes, in images built from other images. As you build your own images, only the changes you make to the base image are stored in the new image, which links back to the base image for all its dependencies. Thus, you can create images that have a virtual size of 266MB but take up only a few megabytes on disk.
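
You can see the layers that make up an image, and the size each one contributes, with the docker history command:

docker history ubuntu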

Fully configured containers can then be pushed up to a central repository to be used elsewhere in the organization or even shared publicly. In this way, an application developer can publish a public container for an app, or you can create private repositories to store all the containers used internally by your organization.
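
As a sketch of what publishing looks like, you tag an image with the repository it should land in and then push it. The registry host myregistry.example.com and the repository name here are placeholder assumptions; pushing to Docker Hub uses your Docker Hub username as the prefix instead:

docker tag ubuntu:latest myregistry.example.com/my-ubuntu:v1
docker push myregistry.example.com/my-ubuntu:v1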

Create a new Docker image from a container

Now that you have some understanding of how images and containers work, let’s set up an Apache web server container and make it permanent.

Build a new Docker container

First, you need to build a new container. There are a few ways to do this, but because you have a few commands to run, start a root shell in a new container:

docker run -i -t --name apache_web ubuntu /bin/bash

This creates a new container with a unique ID and the name apache_web. It also gives you a root shell because you specified /bin/bash as the command to run. Now, refresh the container’s package list and install the Apache web server using apt-get:

apt-get update
apt-get install apache2

Note that you don’t need to use sudo, because you’re running as root inside the container. You do need to run apt-get update first, because the package list inside the container is not the same as the one outside of it; the base image ships with an empty package cache. (The other commands run inside the container likewise don’t require sudo.)

The normal apt-get output appears, and the Apache2 package is installed in your new container. Once the install has completed, start Apache, install curl, and test the installation, all from within your container:

service apache2 start
apt-get install curl
curl localhost

If you were doing this in a production environment, you’d next configure Apache to your requirements and install an application for it to serve. Docker lets directories outside a container be mapped to paths inside it, so one approach is to store your web app in a directory on the host and make it visible to the container through a mapping.
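
A minimal sketch of such a mapping, assuming a hypothetical host directory /srv/my-web-app holding your site’s content (the -v switch maps a host path to a path inside the container):

docker run -i -t -v /srv/my-web-app:/var/www/html ubuntu /bin/bash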

Create a startup script for a Docker container

Remember that a Docker container runs only as long as its process or processes are active. So if the process you launch when you first run a container moves into the background, like a system daemon, Docker will stop the container. Therefore, you need to run Apache in the foreground when the container launches, so that the container doesn’t exit as soon as it fires up.

Create a script, startapache.sh, in /usr/local/sbin:

apt-get install nano

nano /usr/local/sbin/startapache.sh

(You don’t have to use the nano editor to do this, but it’s convenient.)

The contents of startapache.sh:

#!/bin/bash
. /etc/apache2/envvars
/usr/sbin/apache2 -D FOREGROUND

Save the file and make it executable:

chmod +x /usr/local/sbin/startapache.sh

All this small script does is bring in the appropriate environment variables for Apache and start the Apache process in the foreground.

You’re done modifying the contents of the container, so you can leave the container by typing exit. When you exit the container, it will stop.

Commit the container to create a new Docker image

Now you need to commit the container to save the changes you’ve made:

docker commit apache_web local:apache_web

The commit will save your container as a new image and return a unique ID. The argument local:apache_web will cause the commit to be placed in a local repository named local with a tag of apache_web.

You can see this by running the command docker images:

REPOSITORY   TAG          IMAGE ID       CREATED          SIZE
local        apache_web   540faa63535d   24 seconds ago   233MB
ubuntu       latest       b1e9cef3f297   4 weeks ago      78.1MB

Note that the exact details of your image, such as the image ID and the image size, will be different from my example.

Docker networking basics

Now that you have your image, you can start your container and begin serving pages. Before you do that, let’s discuss how Docker handles networking.

Docker can create various virtual networks used by Docker containers to talk to each other and the outside world:

bridge: This is the network that containers connect to by default. Containers on the same bridge network can talk to each other directly and can reach the outside world through NAT, but they are not directly exposed on the host’s network.

host: This network lets containers be seen by the host directly, as if any apps within them were running as local network services.

none: This is essentially a null or loopback network. A container connected to none can’t see anything but itself.

Other network drivers also exist, but these three are most crucial for starting out.
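
You can list the networks Docker has created on your system with docker network ls, and attach a container to a specific one with the --network flag (none below is purely for illustration):

docker network ls
docker run -i -t --network none ubuntu /bin/bash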

When you want to launch a container and have it communicate with both other containers and the outside world, you need to manually map ports from that container to the host. For the sake of my example, you can do this on the command line when you launch your newly created container:

docker run -d -p 8080:80 --name apache local:apache_web /usr/local/sbin/startapache.sh

The -p switch is used for port mapping. Here, it maps port 8080 on the host to port 80 inside the container.

Once you run this command, you should be able to point a web browser at port 8080 on the host’s IP address (for example, http://localhost:8080) and see the default Apache web server page.
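
You can run the same check from a host console with curl:

curl http://localhost:8080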

You can see the status of the container and the TCP port mappings by using the docker ps command:

CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                  NAMES
81d8985d0197   local:apache_web   "/usr/local/sbin/sta…"   13 minutes ago   Up 12 minutes   0.0.0.0:8080->80/tcp   apache

You can also look up the network mappings by using the docker port command; in this case, docker port apache returns:

80/tcp -> 0.0.0.0:8080

Note that you could use the -P option on the docker run command to publish all of the container’s exposed ports to the host, mapping a random high port such as 49153 back to port 80 on the container. This can be useful in scripting, but it’s generally a bad idea in production.
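
A sketch of that usage, assuming an image that declares exposed ports (the container name apache_random is just an illustration; docker port then shows which random host port was chosen):

docker run -d -P --name apache_random local:apache_web /usr/local/sbin/startapache.sh
docker port apache_random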

At this point, you have a fully functional Docker container running your Apache process. When you stop the container, it will remain in the system and can be restarted at any time via the docker restart command.
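
For example, to stop the running container and later bring it back:

docker stop apache
docker restart apache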

Use Dockerfiles to automate Docker image builds

As educational as it is to build Docker containers manually, it is pure tedium to do this repeatedly. To make the build process easy, consistent, and repeatable, Docker provides a form of automation for creating Docker images called Dockerfiles.

Dockerfiles are text files, typically stored in a source repository alongside the rest of a project’s code. They describe how a specific container image is built, letting Docker perform the build process for you automatically. Here is an example Dockerfile for a minimal container, much like the one I built in the first stages of this demo:

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y curl
ENTRYPOINT ["/bin/bash"]

If you save this file as dftest in your local directory, you can build an image named ubuntu:testing from dftest with the following command:

docker build -t ubuntu:testing - < dftest

In PowerShell, you’d use this command:

cat .\dftest | docker build -t ubuntu:testing -

Docker will build a new image based on the ubuntu:latest image. Then, inside the container, it will perform an apt-get update and use apt-get to install curl. Finally, it will set the default command to run at container launch as /bin/bash. You could then run:

docker run -i -t ubuntu:testing

Et voilà! You have a root shell on a new container built to those specifications. Note that dftest is only the name of the Dockerfile; the image itself is referenced by the ubuntu:testing tag you assigned at build time, so that is the name you pass to docker run.

Numerous instructions are available for use in a Dockerfile, such as copying files from the build context into the image, setting environment variables, and even setting triggers to be used in future builds, as the sketch below illustrates. See the Dockerfile reference page for a full list of Dockerfile instructions.
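
As an illustration of a few more of those instructions, here is a sketch of how the Apache image built by hand earlier might be expressed as a Dockerfile. It assumes startapache.sh sits next to the Dockerfile in the build context; the names and layout are illustrative, not the article’s original build:

FROM ubuntu:latest
# Refresh the package list and install Apache in one layer
RUN apt-get update && apt-get install -y apache2
# Copy the startup script from the build context into the image
COPY startapache.sh /usr/local/sbin/startapache.sh
RUN chmod +x /usr/local/sbin/startapache.sh
# Document the port Apache listens on
EXPOSE 80
# Run Apache in the foreground when the container starts
ENTRYPOINT ["/usr/local/sbin/startapache.sh"]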

Next steps with Docker

There’s much more to Docker than we’ve covered in this guide, but you should have a basic understanding of how Docker operates, a grasp of the key Docker concepts, and enough familiarity to build functional containers. You can find more information on the Docker website including an online tutorial that goes into more granular detail about Docker features.
https://www.infoworld.com/article/2176798/docker-tutorial-get-started-with-docker.html
