docker

A 12-post collection

Run Commands in Docker While Preserving Rights and Ownership

docker run --user ${UID} ...  

This runs commands in a container as the same user that whoami reports on your host.

It's not perfect though: the container won't know the actual username behind ${UID}, which, in many cases, is not acceptable.

If you want to make it more realistic and have the container actually know this user and its username, I'm afraid you might have to mount /etc/passwd into the container, which is in most ways not favorable:

docker run --user ${UID} -v /etc/passwd:/etc/passwd:ro ...  

Still, it manages to work quite well.

Another way that might serve you well is to run the container as root just like always, but run specific commands as hand-picked users, like so:

In the Dockerfile:

# run with user id
RUN sudo -u "#<userid>" <command>  
# run with username
RUN sudo -u "<username>" <command>  

Note: you have to apt-get install sudo to do this, and you still need to mount /etc/passwd for usernames to resolve.
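A quick sanity check (a sketch, using a stock debian image; any image that resolves users via /etc/passwd behaves the same):

docker run --rm --user ${UID} -v /etc/passwd:/etc/passwd:ro debian whoami  
# should print your host username; without the mount you would get
# "whoami: cannot find name for user ID <uid>" instead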


Enable Docker Remote API

Did you know that Docker doesn't enable its remote API by default? So, if you want to use PyCharm's remote interpreter feature, you are out of luck out of the box.

In that case, you have to enable it first... here is how to do it.

The original article is here.

First thing, let's see the docker status first:

sudo systemctl status docker  
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─override.conf
   Active: active (running) since Mon 2017-02-27 14:10:48 ICT; 4min 16s ago
     Docs: https://docs.docker.com
 Main PID: 10750 (dockerd)
   CGroup: /system.slice/docker.service
           ├─10750 /usr/bin/dockerd -H fd://
           └─10768 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-cont

... and so much more ...

Under CGroup, you can see that dockerd is run with almost no arguments.

We would want it to look like this instead, to allow local connections to dockerd via a TCP port:

dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock

To realize that, we have to edit how the dockerd is started:

sudo systemctl edit docker  

This will open an editor on a newly generated override file, which, by the way, should be blank in this case.

Add these lines:

[Service]
ExecStart=  
ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock  

Note: the first, empty ExecStart= clears the old definition before the new one is set. Bind to tcp://0.0.0.0:2375 instead if you need to reach the API from another machine, but be aware that this exposes the unauthenticated daemon to your network.

For the change to take effect, restart the daemon:

sudo systemctl restart docker  
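Once the daemon is back up, a quick sanity check against the API (assuming the tcp://127.0.0.1:2375 binding above):

curl http://127.0.0.1:2375/version  
# should return a JSON blob with the daemon version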

Docker Recipe for "gcsfuse" (Google Cloud Storage FUSE)

What's gcsfuse?

from: https://cloud.google.com/storage/docs/gcs-fuse

Cloud Storage FUSE is an open source FUSE adapter that allows you to mount Google Cloud Storage buckets as file systems on Linux or OS X systems.

gcsfuse main repository: https://github.com/GoogleCloudPlatform/gcsfuse

Dockerize it

This is the product of many hours of my trial-and-error.

In the Dockerfile:

FROM debian:latest

RUN apt-get update  
RUN apt-get install -y curl lsb-release

RUN export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s` \  
    && echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | tee /etc/apt/sources.list.d/gcsfuse.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

RUN apt-get update  
RUN apt-get install -y gcsfuse

# need to run command with given user_id
RUN apt-get install -y sudo

# set running environments
ENV GOOGLE_APPLICATION_CREDENTIALS /credential.json  
ENV BUCKET_NAME undefined-bucket-name  
ENV UID 1  
ENV DATA_DIR /mnt

COPY entrypoint.sh /  
ENTRYPOINT ["sh", "/entrypoint.sh"]  

In the entrypoint.sh:

#!/bin/sh

# just in case: make sure the data dir belongs to the target user
chown -R ${UID}:${UID} ${DATA_DIR} 

# run with given user
exec sudo -u "#${UID}" gcsfuse --foreground --key-file=${GOOGLE_APPLICATION_CREDENTIALS} ${BUCKET_NAME} ${DATA_DIR}  

You may build it using: docker build -t gcsfuse .

The run command:

docker run -it --rm --privileged \  
  -e BUCKET_NAME=<your_bucket_name> \
  -e UID=${UID} \
  -v /etc/passwd:/etc/passwd:ro \
  -v <path_to_credentials>:/credential.json:ro \
  -v <your_mount_path>:/mnt:shared \
  gcsfuse

Note: :shared is crucial here ... it allows late-mounting inside the container to be visible to the host filesystem. Without it, the host will not see anything: the mount performed inside the container does not propagate back to the host, so the host keeps looking at the original, empty directory.
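To see the propagation in action, a rough host-side check (assuming the container is up and <your_mount_path> is the path you mounted):

ls <your_mount_path>  
# with :shared you should see the bucket's contents here;
# without it, this directory stays empty on the host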

Note 2: credential.json is the Service Account Key obtained from the Google Cloud Console.


Ubuntu 16.04 Init (with Docker)

This is a bunch of commands, handy mostly for my own use.

Create User

adduser user  
# add to sudo group
usermod -aG sudo user  
# from your station
ssh-copy-id user@<remote_host>  

Or

# manually
# from your station
cat ~/.ssh/id_rsa.pub  
# copy it
# to your remote (user)
mkdir ~/.ssh  
chmod 700 ~/.ssh  
vim ~/.ssh/authorized_keys  
# paste it, save, quit
chmod 600 ~/.ssh/authorized_keys  
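Either way, from your station, the key-based login should now work without a password prompt (<remote_host> being your server's address):

ssh user@<remote_host>  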

Swapfile

(thread: https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04)

Allocate

sudo fallocate -l 1G /swapfile  

Enable

sudo chmod 600 /swapfile  
sudo mkswap /swapfile  
sudo swapon /swapfile  
# see the result
sudo swapon --show  

Make Swap Permanent

# back up old setting
sudo cp /etc/fstab /etc/fstab.bak  
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab  
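If you want to verify the fstab entry without rebooting, a rough check (this assumes /swapfile is your only swap):

# deactivate the swapfile, then re-activate everything listed in fstab
sudo swapoff /swapfile  
sudo swapon -a  
# it should come back
sudo swapon --show  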

Docker

(thread: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04)

sudo apt-get update  
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D  
sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'  
sudo apt-get update  
apt-cache policy docker-engine  

Should see:

docker-engine:  
  Installed: (none)
  Candidate: 1.11.1-0~xenial
  Version table:
     1.11.1-0~xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
     1.11.0-0~xenial 500
        500 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages

Then install Docker and check the daemon:

sudo apt-get install -y docker-engine  
sudo systemctl status docker  

Should see:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Make it run without sudo:

sudo usermod -aG docker $(whoami)  
# relogin
docker info  

Docker Compose

(thread: https://docs.docker.com/compose/install/)

You might be tempted to install it from apt-get, but you will get an old version.

sudo curl -L "https:

SSH using docker

Let's say you want to ssh using Docker, and you also want access to all the settings you have with your usual ssh. Here is how you do it!

Dockerfile:

FROM debian:jessie

RUN apt-get update  
RUN apt-get install -y --no-install-recommends openssh-client

VOLUME ["/ssh"]

COPY entrypoint.sh /usr/local/bin/  
ENTRYPOINT ["entrypoint.sh"]  

entrypoint.sh:

#!/usr/bin/env bash

# install ssh keys (/ssh -> ~/.ssh)
cp -R /ssh /root/.ssh  
chmod -R 500 /root/.ssh

# entrypoint
# redirects all arguments to the ssh command
# 'exec' will replace the process (might be useful for redirecting SIGTERM)
exec ssh "$@"  

If we mount our .ssh directory to /ssh in the container, the entrypoint will automatically copy its contents (with the right permissions) to /root/.ssh. Thus, we can use ssh as if it were the one on our host machine.

Build the Dockerfile:

docker build -t ssh .  

Set an alias for your cmd.exe (via doskey):

doskey ssh-docker=docker run --rm -it -v %HOME%\.ssh:/ssh:ro ssh $*  

Note: the $* at the end is what passes the arguments through.
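On a Linux/macOS host, the equivalent is a plain shell alias; no $* is needed because trailing arguments are appended automatically (a sketch, assuming the image is tagged ssh as above):

alias ssh-docker='docker run --rm -it -v "$HOME/.ssh:/ssh:ro" ssh'  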

Enjoy:

ssh-docker user@<host>  

Specifying Network IPs in Docker Compose

Normally, the exact IP address is not really important for connections between Docker containers, because we can just use the hostnames, which are easier and more intuitive.

However, there are times when we really need specific IP addresses, and Docker does provide us the means to do it.

Here is a contrived example of docker-compose.yml, in which we define services A, B, C and a network called "my-network". With this network, we link A and B together. We also still have the "default" network, which is created automatically, and we use it to connect A and C in the normal fashion, through hostnames.

version: '2'

# define the network
networks:  
    my-network: 
        ipam:
            config:
                - subnet: 172.16.1.0/24
                  gateway: 172.16.1.1 # you cannot allocate this address to any of the containers

services:  
    A:  
        image: debian # placeholder image, just to make the example runnable
        command: sleep infinity
        networks:
            default:
            my-network:
                ipv4_address: 172.16.1.2 # you can also let Docker's IPAM assign one

    B:
        image: debian
        command: sleep infinity
        networks:
            my-network:
                ipv4_address: 172.16.1.3

    C:
        image: debian
        command: sleep infinity
        networks: # for the default network alone, you don't have to write this
            default:

Now, if you exec into B, you will find yourself able to ping "172.16.1.2" via my-network. On the other hand, if you exec into C, you will be able to ping A using its hostname.
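For example, roughly like this (a sketch, assuming ping is available in the images, as it was in the debian images of that era):

docker-compose up -d  
# by IP, via my-network
docker-compose exec B ping -c 1 172.16.1.2  
# by hostname, via the default network
docker-compose exec C ping -c 1 A  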


Docker-compose to link outside containers?

Let's say we have two docker-compose files, in separate directories, which we want to link together.

First, we have:

version: '2'  
services:  
    A:
        image: nginx

It is important to know the specific network name (the external name) that the first docker-compose up will create. We'll need to run the first one with -p to set the project name, which becomes part of the network name in {project_name}_default fashion.

In this case, running docker-compose -p A up will automatically create A_default as the default network for the compose project.
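You can confirm the exact network name it created with:

docker network ls  
# A_default should be in the list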

Second, we have:

version: '2'  
networks:  
    A_network:
        external:
            name: A_default
services:  
    B:
        image: debian
        command: bash
        networks:
            - default
            - A_network

Now, run it. This time I will just run docker-compose run B for the purpose of getting a shell.

And try pinging A (across the networks):

root@<container_id>:/# ping A  
PING A (172.21.0.2): 56 data bytes  
64 bytes from 172.21.0.2: icmp_seq=0 ttl=64 time=0.101 ms  
64 bytes from 172.21.0.2: icmp_seq=1 ttl=64 time=0.072 ms  
64 bytes from 172.21.0.2: icmp_seq=2 ttl=64 time=0.072 ms  
64 bytes from 172.21.0.2: icmp_seq=3 ttl=64 time=0.073 ms  

Understand that B also has its own default network, but it also taps into the external one, defined here as A_network, which ultimately maps to the A_default network created by the first compose file.
