Learn Docker – interface operation (Docker Desktop)

Docker Desktop is a tool for developing, building, and containerizing applications in a desktop environment. Available for Windows and Mac operating systems, it allows developers to easily create and run containers in their local environment and interact with Docker Hub and other container registries. Docker Desktop integrates Docker Engine, Docker Compose and Docker CLI tools to make it easier for users to create and manage Docker containers.

  • Docker Engine, the core component of Docker, is a lightweight containerization technology that can run containers on a single host or in a cloud environment. It allows applications to be packaged, distributed and run in containers, making application deployment and management simpler and more reliable. Docker Engine includes the Docker daemon and Docker CLI, which can be used together to build, run and manage Docker containers.
  • Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure an application’s services and start and stop containers with a single command. Docker Compose allows developers to easily manage the various components of an application (such as databases, web servers, and applications) together without having to manually create, run, and connect individual containers. Using Docker Compose can help developers quickly establish an environment and improve deployment speed and development efficiency.
  • Docker CLI (Command Line Interface) is a command-line tool used to interact with Docker Engine. It provides a set of commands and options for managing Docker containers, images, networks, and data volumes. The Docker CLI is one of the most basic and commonly used tools in the Docker ecosystem: from the command line you can create, run, stop and delete containers, as well as build, push and pull images (a few common commands are sketched after this list).
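
As a rough illustration (not tied to any of the projects below), here are a few standard Docker CLI commands; the image and container names are placeholders:

docker pull nginx                           # download an image from a registry
docker run -d -p 8080:80 --name web nginx   # start a container in the background
docker ps                                   # list running containers
docker stop web                             # stop the container
docker rm web                               # remove the container
docker rmi nginx                            # remove the image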

This article introduces some operations on Docker Desktop.

Multi-container application

Docker provides a tool, Docker Compose, that can start multiple containers with a single command.
Next, we will use an example to demonstrate how to use Docker Compose.

  1. Clone the multi-container-app project from GitHub
git clone https://github.com/docker/multi-container-app

This is a simple todo app built using ExpressJS and Node. All todos are saved in a MongoDB database.

  2. The compose.yaml configuration file
    There is a compose.yaml file in the directory cloned from GitHub. This file tells Docker how to run the application.

Its contents look roughly as follows (a sketch; check the cloned repository for the exact file):
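
# Approximate sketch of compose.yaml in multi-container-app; image tags and options may differ
services:
  todo-app:
    build:
      context: ./app          # the ExpressJS/Node front end
    ports:
      - 3000:3000             # the app is reached at localhost:3000
    depends_on:
      - todo-database
  todo-database:
    image: mongo:6            # MongoDB image (tag is an assumption)
    ports:
      - 27017:27017
    # volumes:                # uncommented later to persist the todos
    #   - database:/data/db

# volumes:
#   database: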

  3. Run the application
docker compose up -d

Run this command from the project directory; it builds and runs all the services listed in the compose file.

  • -d tells docker compose to run in detached mode (in the background).
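
Once the stack is up, a few related Compose commands are useful (run them from the same project directory):

docker compose ps        # list the containers started from this compose file
docker compose logs -f   # follow the combined logs of all services
docker compose down      # stop and remove the containers created by compose up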
  4. View the result in the browser
    In Docker Desktop, you can see that two containers, todo-app and todo-database, are running.
    Select the localhost:3000 link, or open the address directly in a browser:

http://localhost:3000/

  5. Delete
    Storing the configuration in a Compose file has the added advantage of making it easy to delete everything and restart.

In Docker Desktop, just select the application stack and choose Delete. When you want to restart, run docker compose up in the project folder again; this restarts the application. Note that when the database container is deleted, any todos created so far are lost as well.
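
The command-line equivalent, if you are not using the Docker Desktop UI, is roughly:

docker compose down            # remove this stack's containers and networks
docker compose down --volumes  # also remove named volumes (this deletes the stored todos)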


Persist container data (volumes)

Docker isolates all content, code, and data within the container from the local file system. This means that when a container is deleted in Docker Desktop, all contents within it will be deleted.
Sometimes, you may want to persist the data a container generates. This is where volumes come in.

The multi-container-app example above is used here.
If you want to retain data after deleting the container, you can use volumes. A volume is a location in the local file system, managed by Docker.

To add a volume to the project, simply go to the compose.yaml file and uncomment the following lines:

todo-database:
    # ...
    volumes:
      - database:/data/db

# ...
volumes:
  database:

The volumes element nested under todo-database tells Compose to mount a volume named database at /data/db inside the todo-database service's container.
The top-level volumes element defines and configures the volume named database, which can then be used by any service in the Compose file.
Now, no matter how often the container is deleted and restarted, the data persists and can be accessed by any container on the system that mounts the database volume. Docker checks whether the volume exists and creates it if it does not.
Run the application again with the docker compose up command in the project directory:

docker compose up -d
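
To confirm that the volume was created, you can list and inspect it. The exact volume name below is an assumption; Compose prefixes it with the project name, so it is typically multi-container-app_database:

docker volume ls                                    # list all volumes managed by Docker
docker volume inspect multi-container-app_database  # show where the volume's data is stored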

Now, when developing the application on your local system, you can take advantage of the container's environment. Any changes made to the application on the local system are reflected in the container (note that this live-edit workflow relies on the bind mount described in the next section, not on the named volume). In your local directory, open app/views/todos.js in an IDE or text editor, update the Enter your task string, and save the file. Visit or refresh localhost:3001 to see the changes.

Access local folders from containers

Docker isolates all content, code, and data inside a container from the local file system.
Sometimes, though, you may want a container to access a directory on your system. This is when you use a bind mount. The bindmount-apps example project is used here:

git clone https://github.com/docker/bindmount-apps

If you want to access data on your system, you can use bind mounts. Bind mounts allow you to share a directory from the host’s file system to a container.

To add a bind mount to this project, go to the compose.yaml file and uncomment the following lines:

todo-app:
    #...
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules

The volumes element tells Compose to mount the local ./app folder at /usr/src/app in the todo-app service's container. This bind mount overlays the static content of the /usr/src/app directory in the container and creates what is called a development container. The second entry, /usr/src/app/node_modules, prevents the bind mount from overwriting the container's node_modules directory, preserving the packages installed inside the container.
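
A typical workflow with this setup, assuming the app reloads on file changes (restart the service manually if it does not), looks roughly like this:

docker compose up -d --build      # rebuild the image and start the development container
# edit files under ./app on the host; the bind mount makes the changes visible inside the container
docker compose restart todo-app   # restart the service if the app does not pick up changes by itself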

Containerization of applications

When working with containers, you typically create a Dockerfile to define the image and a compose.yaml file to define how to run the image.
To help you create these files, Docker has a command called docker init. Run this command in the project folder and Docker will create all the files needed.

docker init
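
With a recent Docker Desktop release, docker init typically generates files along these lines (exact names may vary by version):

Dockerfile        # how to build the image
.dockerignore     # files excluded from the build context
compose.yaml      # how to run the image
README.Docker.md  # notes about the generated setup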

docker init detects the language of your project and prompts you to confirm it. Select your language if it appears in the list, or choose "Other" if it does not.

docker init will walk you through several questions in order to configure the project with sensible defaults.

Once all questions are answered, you can run docker compose up to run your project.

However, the Dockerfile and compose.yaml files created for the project may require additional changes. In that case, see the Dockerfile reference and the Compose file reference in the Docker documentation.

Publish image

  1. Log in to Docker

  2. Rename the image
docker tag docker/welcome-to-docker YOUR-USERNAME/welcome-to-docker
  3. Publish to Docker Hub
    In the Actions column for the image, select the Show image actions icon, then choose "Push to Hub" from the pop-up menu.

  4. Go to Docker Hub (https://hub.docker.com/) to see the published image.
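
If you prefer the command line, the equivalent steps look roughly like this (YOUR-USERNAME is a placeholder for your Docker Hub username):

docker login                                                          # authenticate with Docker Hub
docker tag docker/welcome-to-docker YOUR-USERNAME/welcome-to-docker   # name the image under your account
docker push YOUR-USERNAME/welcome-to-docker                           # upload the image to Docker Hub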