When several applications run on one machine, a common concern is that they can interfere with each other's work, and for services this is especially critical. Because they process client requests, a database server, a performance monitoring server, or similar services can be taken down by neighbouring processes that run with the same privileges.
To guarantee the integrity, availability and isolated operation of services, containerisation technology came into use. In this article we will look at deploying a PostgreSQL database in a Docker container.
Brief summary
Docker is a platform that deploys pre-prepared file system snapshots, called images, and runs them in isolated environments called containers.
The virtualisation happens at the level of the OS kernel: a new process is given its own file system, network and process namespaces. A Docker-confined process with these attached environments then performs all the tasks of the service.
Installation and configuration
Let's start with the dependencies and software the service needs. If you don't have Docker installed yet, run the command:
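The exact installation steps depend on the distribution; a minimal sketch, assuming a Debian/Ubuntu host where Docker is packaged as docker.io:

```shell
# Refresh the package index and install the Docker engine
sudo apt-get update
sudo apt-get install -y docker.io
```

On other distributions, use the native package manager or the official installation instructions for your platform.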
After that we can start the dockerd daemon, check that it works, and then find the required image in the registry:
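A possible sequence, assuming a systemd-based host:

```shell
sudo systemctl start docker      # start the dockerd daemon
sudo systemctl enable docker     # optional: also start it on boot
docker info                      # verify that the daemon responds
docker search postgres           # look up the image on Docker Hub
```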
Note that you should install the official container image; it is marked with a check badge on the right in the search results. Let's download the found image with the database server via the command:
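For the official PostgreSQL image this is simply:

```shell
docker pull postgres             # without a tag, pulls postgres:latest
```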
This downloads the image from the remote registry to the local one, which the Docker engine will work with from now on. Let's check our image in the local repository with the command:
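For example:

```shell
docker images postgres           # list local images named "postgres"
```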
Since the container shares the OS kernel with the host machine, some things are not isolated. For example, every process is created under a UID and GID, and these numeric IDs can coincide with a user of the host OS. So be careful when mounting folders from the host machine and creating users.
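As an illustration (a sketch, assuming the official postgres image, whose entrypoint executes a non-server command as given), you can compare the numeric IDs of the in-container postgres user with accounts on the host:

```shell
# Print the UID/GID of the "postgres" user baked into the image
# (historically 999 in the official image); the same number may
# belong to a completely different account on the host.
docker run --rm postgres id postgres
# Check which host account, if any, owns that numeric UID:
getent passwd 999
```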
docker run -d \
  --name your-container-name \
  --restart unless-stopped \
  -e POSTGRES_USER=dbuser \
  -e POSTGRES_PASSWORD=your-pass \
  -e POSTGRES_DB=your-DB \
  -p 5432:5432 \
  -v name-of-volume:/var/lib/postgresql/data \
  postgres && docker ps
In the command that starts the container, we pass environment variables and options:
- POSTGRES_USER - database/account username for the service on the OS;
- POSTGRES_PASSWORD - password for the account;
- POSTGRES_DB - database name;
- -p - maps a host port to a container port;
- -v - a named volume for storing the service's data.
The image itself will create the user and run the service with the created database on that user's behalf. The container, i.e. a restricted process, is now a separate, isolated space that accepts network calls on port 5432. Accordingly, the server can be reached over the network from a remote device using the IP address of the host OS and the port that was forwarded earlier. You can check its availability with the commands:
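For example, assuming the PostgreSQL client tools are installed on the machine you are connecting from, and the dbuser/your-DB names from the command above:

```shell
pg_isready -h 127.0.0.1 -p 5432 -U dbuser        # lightweight reachability check
psql -h 127.0.0.1 -p 5432 -U dbuser -d your-DB   # full client connection
```

From a remote device, replace 127.0.0.1 with the host's IP address.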
Alternatively, we can connect by running the psql client inside the container itself via docker exec:
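With the container and credential names used above, this looks like:

```shell
# Open an interactive psql session inside the running container
docker exec -it your-container-name psql -U dbuser -d your-DB
```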
Great, the connection works! Now we can manage a DB server that is isolated from the main OS environment. To remove the container, we will execute the commands:
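A possible teardown sequence, using the names from the run command above:

```shell
docker stop your-container-name     # stop the running container
docker rm your-container-name       # delete the container itself
docker rmi postgres                 # delete the downloaded image
docker volume rm name-of-volume     # delete the volume with the data
```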
After that the container, the image, and the data stored for it will be deleted. If you intend to reinstall the image, you can skip the docker volume rm name-of-volume command to keep the created databases.
In this article we have looked at a step-by-step method of deploying a container environment with a database image. This approach separates the service from the rest of the software, which improves integrity and protects against conflicts between different dependencies.