Mastering Docker: A Step-by-Step Guide to Creating and Running Docker Images

Nonni World

Introduction

Docker packages applications into units known as containers. Each container offers an isolated environment that resembles a virtual machine (VM). However, unlike VMs, Docker containers do not run a complete operating system. Instead, they share the host's kernel and utilize software-level virtualization.





What is Docker?

Docker has become a standard tool for software developers and system administrators. It provides a convenient method to quickly launch applications without affecting the rest of your system. You can initiate a new service with a simple docker run command.

Containers encapsulate everything required to run an application, including OS package dependencies and your own source code. You specify the steps for creating a container using instructions in a Dockerfile. Docker utilizes the Dockerfile to build an image.

Images define the software available within containers. This concept is somewhat similar to starting a VM using an operating system ISO. If you create an image, any Docker user will be able to launch your application using the docker run command.


Understanding Containers and Docker: How It Works

Containers leverage features of the operating system's kernel to create partially virtualized environments. You can even create containers from scratch using commands like chroot, which allows you to start a process with a specified root directory instead of the system's root directory. However, directly using kernel features can be tricky, insecure, and prone to errors.

Docker offers a comprehensive solution for the production, distribution, and utilization of containers. Modern versions of Docker consist of several independent components:

Docker CLI: This is the command-line interface that you interact with in your terminal. It sends commands to the Docker daemon.

Docker Daemon: This can operate locally or on a remote host and is responsible for managing containers and the images from which they are created.

Container Runtime: This component invokes kernel features to actually launch the containers. Docker is compatible with runtimes that follow the OCI specification, an open standard that ensures interoperability among various containerization tools.

When you're just starting out, you don't need to delve deeply into the inner workings of Docker. Simply installing Docker on your system will provide you with everything necessary to build and run containers.


Why Do We Need Docker?

Containers have gained immense popularity due to their ability to address numerous common challenges in software development. Their key feature is the capability to containerize once and run everywhere, which significantly narrows the gap between development environments and production servers.


Benefits of Using Containers

Consistency Across Environments: Containers ensure that every environment is identical, providing confidence in deployment.

Easy Onboarding: New team members can quickly set up their development instance with a simple docker run command.

Seamless Deployment: When launching a service, the Docker image used can be deployed directly to production, ensuring that the live environment matches the local instance exactly. This eliminates the frustrating "it works on my machine" problem.


Advantages Over Virtual Machines

Lightweight: Unlike traditional virtual machines, which are general-purpose tools designed for diverse workloads, containers are lightweight and self-sufficient. They are particularly well-suited for temporary or throwaway use cases.

Performance Efficiency: Since containers share the host's kernel, they have a minimal impact on system performance.

Rapid Launch Times: Containers start almost instantaneously because they only initiate processes without the need to boot an entire operating system.

Containers provide a streamlined and efficient approach to software development and deployment, making them a preferred choice for many developers.


Let's Start

Docker is compatible with all popular Linux distributions. Additionally, it operates on Windows and macOS. To get Docker up and running, follow the setup instructions specific to your platform.

To confirm that your installation is functioning properly, you can run a simple container with the following command:
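Using the standard Docker CLI, that command is:

```shell
# Download (if necessary) and run the hello-world image
docker run hello-world
```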





This command will initiate a new container using the basic "hello-world" image. The image will display some output that provides information on how to use Docker. After the output is shown, the container will exit, returning you to your terminal.




Creating Images

After successfully executing hello-world, you can start creating your own Docker images. A Dockerfile describes how to build an image for your service, specifying the software to install and the files to include. Here’s a straightforward example using the Apache web server:


FROM httpd:latest
RUN echo "LoadModule headers_module modules/mod_headers.so" >> /usr/local/apache2/conf/httpd.conf
COPY .htaccess /usr/local/apache2/htdocs/.htaccess
COPY index.html /usr/local/apache2/htdocs/index.html
COPY css/ /usr/local/apache2/htdocs/css/


The FROM line establishes the base image. In this instance, we are using the official Apache image. Docker will execute the subsequent instructions in your Dockerfile on top of this base image.


The RUN instruction executes a command within the container during the build. This can be any command available in the container's environment. In this case, we are enabling the headers Apache module, which the .htaccess file may use to configure routing rules.


The final lines transfer the HTML and CSS files from your working directory into the container image. Now, your image comprises everything necessary to operate your website.


To build the image, run the following command.
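Using the my-website:v1 tag described just below, the build command looks like this:

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it as my-website:v1
docker build -t my-website:v1 .
```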



Docker will utilize your Dockerfile to assemble the image. You will see output in your terminal as Docker processes each of your instructions.




The -t option in the command allows you to tag your image with a specific name (my-website:v1). This simplifies future references. Tags consist of two parts, separated by a colon: the first part is the image name, and the second typically indicates its version. If you do not include the colon, Docker will automatically use latest as the default tag version.


The . at the end of the command instructs Docker to utilize the Dockerfile located in your local working directory. This also establishes the build context, enabling you to utilize files and folders from your working directory with COPY instructions in your Dockerfile.


After creating your image, you can launch a container using the following command:
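Matching the flags explained below, the command is:

```shell
# Start a container in the background, mapping host port 8080
# to port 80 inside the container
docker run -d -p 8080:80 my-website:v1
```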




In this command, we are utilizing several additional flags with docker run:

The -d flag allows the Docker CLI to detach from the container, enabling it to run in the background.


The -p flag defines a port mapping, linking port 8080 on your host to port 80 in the container. You should be able to see your web page by visiting localhost:8080 in your web browser.


Docker images are constructed from layers. Each instruction in your Dockerfile generates a new layer. You can leverage advanced build features, such as multi-stage builds, to reference multiple base images while discarding intermediate layers from earlier stages.
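As a sketch of such a multi-stage build (the Node toolchain and file paths here are illustrative assumptions, not part of the original example):

```dockerfile
# Stage 1: build static assets with a throwaway toolchain image
FROM node:20 AS builder
WORKDIR /src
COPY . .
RUN npm ci && npm run build

# Stage 2: the final image contains only the built output,
# not the build toolchain or its intermediate layers
FROM httpd:latest
COPY --from=builder /src/dist/ /usr/local/apache2/htdocs/
```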


Image Registries

Once you have created an image, you can push it to a registry. Registries serve as centralized storage, enabling you to share containers with others. The default registry is Docker Hub.

Image Availability Check

When you execute a command that references an image, Docker first checks if the image is available locally. If it is not found, Docker will attempt to pull it from Docker Hub. You can manually pull images using the following command:
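For example, to fetch the Apache image used earlier in this guide:

```shell
# Explicitly download the httpd image from Docker Hub
docker pull httpd:latest
```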



Publishing an Image

To publish an image, follow these steps:

1. Create a Docker Hub account.
2. Run the command to log in:
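```shell
docker login
```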



Enter your username and password when prompted.
3. Tag your image with your Docker Hub username:
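Assuming your image is tagged my-website:v1 as earlier (your-username is a placeholder for your actual Docker Hub username):

```shell
# Create an additional tag that includes your Docker Hub username
docker tag my-website:v1 your-username/my-website:v1
```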




4. Now, you can push your image to the registry:
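Using the username-prefixed tag from step 3 (your-username is a placeholder):

```shell
# Upload the tagged image to Docker Hub
docker push your-username/my-website:v1
```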




Once pushed, other users will be able to pull your image and start containers using it.

If you require private image storage, you can run your own registry. Additionally, several third-party services offer Docker registries as alternatives to Docker Hub.


Managing Your Containers

The Docker CLI offers various commands to effectively manage your running containers. Below are some of the most essential commands to be familiar with:

Listing Containers

- Use docker ps to display all currently running containers.

- To see stopped containers as well, append the -a flag.


Example Output of docker ps Command




Stopping and Starting Containers

- To stop a container, execute:
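For example:

```shell
docker stop my-container
```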



Replace my-container with the actual container's name or ID, which you can obtain from the ps command.


- To restart a stopped container, use:
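For example:

```shell
docker start my-container
```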




Container Lifecycle

- Containers typically run as long as their main process remains active.

- Restart policies determine the behaviour when a container stops or when the host restarts. Use the flag --restart always with docker run to ensure a container restarts immediately after stopping.
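For instance, to run the website image from earlier with that policy:

```shell
# Docker will restart this container whenever it stops,
# including after the host reboots
docker run -d --restart always -p 8080:80 my-website:v1
```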


Getting a Shell

- To execute a command within a container, use:
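For example:

```shell
# Run a one-off command (here, listing the root directory)
# inside the running container
docker exec my-container ls /
```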




- For interactive access, add the -it flag to open a shell:
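For example:

```shell
# Open an interactive shell session inside the container
docker exec -it my-container sh
```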



Monitoring Logs

Docker automatically gathers output from a container's standard output and standard error streams. To view a container's logs, run:
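For example:

```shell
# Print everything the container has written so far
docker logs my-container
```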



- Use the --follow flag to set up a continuous log stream for real-time monitoring.

Example Output of docker logs my-container Command




Container Orchestration

In production environments, Docker is typically not utilized in its standalone form. Instead, orchestration platforms like Kubernetes or Docker Swarm mode are more commonly employed. These tools are specifically designed to manage multiple container replicas, enhancing both scalability and reliability.





Docker represents just one element within the larger containerization movement. Orchestrators leverage the same container runtime technologies to create an environment that is more suitable for production use. By utilizing multiple container instances, these platforms enable features such as rolling updates and distribution across different machines, which significantly increases your deployment's resilience to changes and outages. In contrast, the standard Docker CLI operates on a single host and manages individual containers.


A Powerful Platform for Containers

Docker provides all the essential tools necessary for working with containers and has emerged as a vital asset in both software development and system administration. The main advantages include enhanced isolation and portability for individual services.

To effectively utilize Docker, it's important to familiarize yourself with the fundamental concepts of containers and images. This knowledge allows you to create customized images and environments that effectively containerize your workloads.
