Resource partitioning now underpins many corporate network infrastructures: a single server can host dozens of virtual environments that run independently of each other, while some are combined into shared platforms or work together.
Today, using Docker as an example of virtualisation technology and a platform for building such virtual environments, let's install it and deploy a Python container image.
Deploying a Docker container
First we need the platform itself, so update the package indexes and install Docker; if it is already pre-installed, skip this step. For deb-like systems we will use the automated deployment script:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo bash get-docker.sh
Following security best practice, we need to create a reduced-privilege user on whose behalf the container will be run. That way, if an attacker escapes the container by running malicious commands, we increase the TTA, or Time-To-Attack. To do this we use the commands:
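The user-creation step might look like the following sketch; the user name python matches the one used later in the article, and the no-login shell is an assumption:

```shell
# Create a dedicated low-privilege user named "python" (the name is an
# assumption matching the rest of the article) with no interactive shell.
sudo useradd -m -s /usr/sbin/nologin python
```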
Since only root or users of the docker group can access the Docker socket, we need to add the python user to it. After that we can start the base container with a Python environment on board.
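Adding the user to the docker group could be done like this; note that group membership takes effect in new login sessions:

```shell
# Grant the python user access to the Docker socket via group membership.
sudo usermod -aG docker python
```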
To do this, let's start the container with an infinite loop in the Bash interpreter, then go inside the container to view the processes:
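Reconstructed from the option breakdown that follows, the start command takes this form:

```shell
# Start the container as the low-privilege user, keep it alive with an
# infinite loop, then list the running containers.
sudo -u python docker run --name my_container -v data-storage:/home -d \
    python /bin/bash -c 'while true; do sleep 1000; done'
sudo docker ps
```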
Where each of the options and arguments presented performs a different function in starting the container:
- sudo -u python docker run - starts the container with normal user rights;
- --name my_container - sets the name of the container to my_container;
- -v data-storage:/home - mounts the data-storage volume to /home inside the container;
- -d - runs the container in the background (detached mode);
- python - uses a python image to create the container;
- /bin/bash -c 'while true; do sleep 1000; done' - executes an infinite loop inside the container:
- /bin/bash -c - runs a command via Bash;
- sudo docker ps - displays a list of running containers after a successful container startup.
To enter the container we will use a command of the form:
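Assuming the container name my_container from the previous step, the command might be:

```shell
# Open an interactive Bash session inside the running container.
sudo docker exec -it my_container /bin/bash
```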
After that, let's update the packages on the machine and install the process manager, where we will view the structure of our running PID space:
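Inside the container this could look as follows; htop as the process manager is an assumption, since the source does not name one:

```shell
# Inside the container we are root, so no sudo is needed:
# refresh package indexes, install a process manager, and view the PID tree.
apt update && apt install -y htop
htop
```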
Inside we see only our auto-run process and the current session. Usually, to run containers where there is no out-of-the-box service, the appropriate resources need to be prepared. When creating a cloud server from Serverspace, each of the standard repositories comes with a key set. To run the tests, you can use an isolated VPS server environment on either of the two platforms, vStack cloud or VMware cloud. Click the Create Server button, select a configuration that suits your needs, then click Create.
If you don't have sufficient resources, you can perform these actions on powerful cloud servers. Serverspace provides isolated VPS / VDS servers for common and virtualized usage.
It will take some time to deploy the server capacity. After that you can connect in any convenient way.
Deploying the service in a Docker container
For example, let's take a working case where we need to deploy a web application service in such a container. To do this, we take the base image: it remains a minimal Python environment that needs to be modified. It is considered minimal for a reason, since some libraries will have to be downloaded along the way.
There are two ways to solve this: in the first case, we write an autorun command and put the application in the repository we will mount; in the second, we rebuild the image. Of course, the second implementation is more correct: it does not require storing a long autorun string, and the container launches immediately with your files.
Let's use the first one and, without leaving the previous session, save the file to the folder we have attached to the virtual storage:
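The source does not show the application file itself, so here is a minimal sketch: a stdlib-only HTTP service written to the mounted directory (the file name app.py and the port are assumptions):

```shell
# Run inside the container session, in the directory mounted from the
# data-storage volume. Creates a minimal stdlib-only web application.
cat > app.py <<'EOF'
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply to every GET request with a plain-text greeting.
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'Hello from the container\n')

# Listen on all interfaces so the service is reachable from outside.
HTTPServer(('0.0.0.0', 8000), Handler).serve_forever()
EOF
```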
After that, quit, stop and delete it:
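Assuming the container name from earlier, the corresponding commands might be:

```shell
exit                           # leave the container session
sudo docker stop my_container  # stop the running container
sudo docker rm my_container    # delete it
```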
Let's write an autostart command that updates the package indexes, installs the Python package manager, and runs the application:
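One possible form of this long autorun string is sketched below; the file path /home/app.py is an assumption carried over from the earlier step:

```shell
# Recreate the container: update indexes, install pip, then start the app
# saved earlier on the data-storage volume mounted at /home.
sudo -u python docker run --name my_container -v data-storage:/home -d python \
    /bin/bash -c 'apt update && apt install -y python3-pip && python3 /home/app.py'
```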
After some time of its deployment, we need to check if it has started. Let's view its logs with the command:
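With the container name assumed above, the log check is:

```shell
# Show the container's stdout/stderr to confirm the service started.
sudo docker logs my_container
```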
Great: the service was launched in a virtual environment on the machine. Next to it you can launch a dozen more, and each will run in its own isolated environment, which avoids collisions between dependency packages and improves both the efficiency and the security of this approach.