Making Game: Why does my docker container keep running after I abort the command that started it?


I have a batch file that I use to start a Docker container on Windows, in this case to start up a Jupyter notebook:

@echo off
SET RootPath=C:/Path/To/Working/Folder
docker run -p 8888:8888 -v %RootPath%:/mnt/env container_name jupyter lab --port 8888 --ip=0.0.0.0 --allow-root --NotebookApp.notebook_dir=/mnt/env

This works fine, but if I close the command prompt where the batch file is running, my Docker container keeps running. Why does this happen, and how can I achieve the expected behaviour of the job being killed as soon as I close the command prompt?

I am running Docker
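The container outlives the console because it is managed by the Docker daemon; the command prompt only hosts the docker CLI client attached to it. One way to approximate the expected behaviour is to give the container a fixed name and stop it explicitly (a sketch; `my-jupyter` is a hypothetical name, and the rest of the command is taken from the batch file above):

```batch
:: Give the container a known name so it can be stopped from anywhere,
:: and use --rm so it removes itself once stopped. Closing the console
:: only kills the docker CLI client, not the container.
docker run --rm --name my-jupyter -p 8888:8888 -v %RootPath%:/mnt/env container_name jupyter lab --port 8888 --ip=0.0.0.0 --allow-root --NotebookApp.notebook_dir=/mnt/env

:: Later, from any console:
docker stop my-jupyter
```

There is no built-in way to tie a container's lifetime to a Windows console window; an explicit `docker stop` (or a wrapper script that runs it) is the usual workaround.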


Server Bug Fix: Cannot reach docker ports inside a VM in windows server 2012R2


TL;DR: Trying to reach Docker-exposed ports on Windows Server 2012 R2, but no cigar.

My setup: Docker Toolbox via Kitematic and VirtualBox, on Windows Server 2012 R2 (itself running as a VM on a server).
I am running a simple Nginx Docker container. I cannot reach it from the machine's IP, nor via the IP of the VirtualBox VM, but I did manage to reach it via http://localhost:80/ using VirtualBox port forwarding (set up manually via VM Manager -> Settings -> Network -> Advanced -> Port Forwarding).
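For reference, that manual GUI port-forwarding step can also be scripted with VBoxManage (a sketch; the VM name "default" is the usual Docker Toolbox machine name and may differ on your setup):

```shell
# forward host port 80 to guest port 80 on the running VM's NAT adapter,
# equivalent to adding a rule under Network -> Advanced -> Port Forwarding
VBoxManage controlvm "default" natpf1 "nginx,tcp,,80,,80"
```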

Currently, reaching the IP of the server from the inner network (on port 80) hits a different page.
But I did manage to install an SSH server and connect to the machine over SSH, which means it is possible to expose ports to the outside. But how?

What I have tried:

  • Disabling IIS (although binding another port with a simple HTML file works only via localhost and not from the outside)
  • Opening ports in FireWall
  • Disabling FireWall
  • Editing hosts file to reach the inner VM IP from localhost
  • Running Docker applications that expose more ports (the ELK stack, for example, listens on ports 9200, 9300, etc.) and running NETSTAT to see what happens; I can see the desired ports listed as exposed in its output.

Thank you in advance.


Code Bug Fix: Docker Update Code in Volume with Gitlab CI / CD


I am learning Docker and I just encountered a problem I cannot solve.

I want to update the source code on my Docker swarm nodes when I make changes and push them. I just have an index.php which echoes "Hello World" and shows phpinfo(). I am using data volumes since they are recommended for production (bind mounts for development).

My problem is: how do I update source code while using volumes? What is the best practice for this scenario?

Currently, when I push changes to my index.php to GitLab, my gitlab-runner recreates the Docker image and updates my swarm service.

This works when I change the PHP version in my Dockerfile, but changes to index.php are not picked up.

My example Dockerfile looks like this. I just copy index.php to /var/www/html in the container and that's it.

When I deploy my swarm stack the first time, everything works.

FROM php:7.4.5-apache
# copy files
COPY src/index.php /var/www/html/
# apache settings
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf

My gitlab-ci.yml looks like this

build docker image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  tags:
    - build-image

deploy docker image:
  stage: deploy
  script:
    - docker service update --with-registry-auth --image $CI_REGISTRY_IMAGE:latest
  tags:
    - deploy-stack

Docker images generally contain an application’s source code and the dependencies required to run it. Volumes are used for persistent data that needs to be preserved across changes to the underlying application. Imagine a database: if you upgraded from somedb:1.2.3 to somedb:1.2.4, you’d need to replace the database application binary (in the image) but would need to preserve the actual database contents (in a volume).

Especially in a clustered environment, don’t try storing your application code in volumes. If you delete the part of your deployment setup that attempts this, then when containers redeploy with an updated image, they’ll see the updated code.
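Concretely, that means the stack definition should mount no volume over the code directory. A sketch of what such a service might look like (the service and registry names here are made up):

```yaml
# docker-stack.yml (sketch): the code lives only in the image, so every
# `docker service update --image ...` rolls out the new index.php
version: "3.7"
services:
  web:
    image: registry.example.com/my-app:latest
    ports:
      - "80:80"
```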


Server Bug Fix: Why dockerize a service or application when you could install it? [closed]


We have around 12 services and other applications such as Presto.

We are thinking about building Docker containers for each service and application. Is it right to dockerize all of them?

When would a Docker container not be the ideal solution?


  1. Quick local environment setup for your team – if all your services are containerized, your development team can spin up a working environment quickly.
  2. Helps avoid the "it works on my machine, but not on yours" problem – a lot of development issues stem from environment setup. If your services are containerized, a big chunk of that gets offloaded into the image.
  3. Easier deployments – while we all have different processes for deploying code, having services containerized makes things a whole lot easier.
  4. Better version control – Docker images can be tagged, which helps with version control.
  5. Easier rollbacks – since things are version controlled, rolling back is easier, sometimes by simply pointing back to the previously working version.
  6. Easy multi-environment setup – as most development teams do, we set up local, integration, staging, and production environments. This is easier when services are containerized and, most of the time, just a matter of switching environment variables.
  7. Community support – we have a strong community of software engineers who continuously contribute great images that can be reused for developing great software. You can leverage that support. Why reinvent the wheel, right?
  8. Many more… but there are a lot of great blogs out there you can read that from. =)

I don't really see many cons, but here's one I can think of.

  1. Learning curve – yes, it does have some learning curve. But from what I have seen with my junior engineers, it doesn't take much time to learn the setup; figuring out how to containerize your own application usually takes longer.


  1. Data persistence – some engineers have concerns about data persistence. You can address this simply by mounting a volume to your container. If you would rather use your own database installation, just point HOST, DB_NAME, USERNAME and PASSWORD at the instance you already have on localhost:5432 and all should be fine.
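For example, switching a containerized app to a database running on the host might look like this (a sketch; the variable names and the `host.docker.internal` address are assumptions and depend on your application and Docker setup):

```shell
# run the app container, pointing it at a Postgres instance
# installed on the host rather than at a containerized one
docker run \
  -e DB_HOST=host.docker.internal \
  -e DB_NAME=mydb \
  -e DB_USERNAME=postgres \
  -e DB_PASSWORD=secret \
  my-app
```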

I hope this helps!

You should containerize all Linux-based services that are stateless and require frequent upgrades/changes/patches. These include all types of front-end and application servers.

Databases/datastores, on the other hand, are a more complex case, since there are issues of performance and data persistence/integrity. Also, databases are not upgraded/patched as frequently as front-end applications.

*Windows containers will only run in Windows.

Docker is a recipe for consistency and reproducibility.

To make a nice cup of tea, you need to boil water, put a tea bag in it, and let it brew for three minutes. How you achieve boiling water is absolutely irrelevant.

Now let’s imagine that you need to serve up 12 cups of tea. Does your staff know how to make a proper brew? Does your staff know how to use a kettle or a pan? What guarantee do you have that each cup of tea will be the same?

You could spend a lot of time training people and making sure you have all the appliances you need. Or you can invest in a machine that will produce the same cup of tea over and over again.

The analogy may seem stupid but my point is that relatively common problems already have well-known solutions.

Unless it’s a one-off scenario or you have additional constraints we don’t know about, what reasons do you have to not consider Docker?

There is no issue with dockerizing multiple services, but I think you need to consider the following things too.

You have to think about how to save the data you have used inside the container. By default, the data inside the container is destroyed when the container shuts down. You may have to mount a volume in order to keep the data permanently.

You may not get bare-metal performance when running in Docker.

IMO it's not a good choice to run all the applications in Docker unless you need to take advantage of containerization. But it is easy to run stateless applications and services with Docker.


Server Bug Fix: Different stdout with supervisord using docker vs docker-compose


I have a service running inside Docker using nginx and php-fpm. I have been beating my head against the wall trying to get all of the logs to redirect to stdout. The approach I took was to use supervisord. With docker-compose up my-app everything works as expected; all of the logs are sent to stdout. However, when I run

docker run -p 81:80 \
       -v $(pwd)/myapp:/var/www/html \
       my-app

I get no output.

Here is my supervisor configuration:


[program:php-fpm]
command=/usr/sbin/php-fpm7.0 -F

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"

[program:nginx-access-log]
command=/usr/bin/tail -f /var/log/nginx/access.log

[program:nginx-error-log]
command=/usr/bin/tail -f /var/log/nginx/error.log

[program:php-fpm-log]
command=/usr/bin/tail -f /var/log/php7.0-fpm.log

[program:laravel-log]
command=/usr/bin/tail -f /var/www/html/storage/logs/laravel.log

and my docker-compose

version: '3'
services:
  my-app:
    build:
      context: ../../my-app
      dockerfile: docker/Dockerfile
    image: my-app
    ports:
      - "81:80"
    volumes:
      - ../../my-app:/var/www/html

my base dockerfile

FROM ubuntu

# Update 
RUN apt-get update --fix-missing && apt-get -y upgrade

# Install Python Setup Tools
RUN apt-get install -y python-pip

# Intall Supervisord
RUN easy_install supervisor

# Install NGINX
RUN apt-get -y install nginx

# Install PHP
RUN apt-get -y install php7.0-fpm 

# Configure PHP-FPM
RUN sed -i 's/;daemonize = .*/daemonize = no/' /etc/php/7.0/fpm/php-fpm.conf && \
    sed -i "/;clear_env = .*/cclear_env = no" /etc/php/7.0/fpm/pool.d/www.conf && \
    sed -i -e 's/max_execution_time = 30/max_execution_time = 300/g' /etc/php/7.0/fpm/php.ini && \
    sed -i -e 's/upload_max_filesize = 2M/upload_max_filesize = 50M/g' /etc/php/7.0/fpm/php.ini && \
    sed -i -e 's/post_max_size = 8M/post_max_size = 50M/g' /etc/php/7.0/fpm/php.ini && \
    sed -i -e 's/variables_order = "GPCS"/variables_order = "EGPCS"/g' /etc/php/7.0/fpm/php.ini && \
    service php7.0-fpm start && \
    service php7.0-fpm stop

COPY supervisor.conf /etc/supervisor/conf.d/supervisor.conf

CMD ["supervisord", "-n", "-c", "/etc/supervisor/conf.d/supervisor.conf"]

and my application dockerfile

FROM mybase

# Configure NGINX
COPY docker/dev2/default.conf /etc/nginx/sites-enabled/default

# Copy application into container
COPY . /var/www/html

RUN touch /var/www/html/storage/logs/laravel.log && \
    chown www-data:www-data /var/www/html/storage/logs/laravel.log && \
    chmod 644 /var/www/html/storage/logs/laravel.log

COPY docker/dev2/supervisor.conf /etc/supervisor/conf.d/supervisor.conf

CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisor.conf"]

What is the difference between docker and docker-compose where redirection to stdout is behaving differently? These containers will be deployed in AWS ECS; I haven’t tested this yet, but I am fearful that I will not get successful logging in ECS if I am experiencing this behavior with docker. Any thoughts, ideas, or suggestions would be greatly appreciated!

When you run docker-compose up, output from all containers is aggregated and displayed in the console.

When you run a container detached with docker run -d, the output is not shown and can only be retrieved with docker logs -f $CONTAINERID

You can likewise "hide" output with docker-compose by using the -d flag.
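In other words, the rough equivalent of the docker-compose behaviour with plain docker would be something like this (a sketch, using the image name from the compose file above):

```shell
# start the container detached, then attach to its log stream,
# which is roughly what `docker-compose up` shows aggregated
CID=$(docker run -d -p 81:80 my-app)
docker logs -f "$CID"
```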

When you run your Docker containers in ECS, you can choose the log driver in the task definition.

Hope this helps.


Linux HowTo: my shell name has been changed


The name in my shell prompt changed to ark-newsdrive3:

(base) ark-newsdrive3: ~ maxx $

I was experimenting with Docker; could that be the cause of this?





It looks as though you are in a conda or Python virtual environment, possibly inside a Docker container.

If you aren't in a container, the deactivate command (or conda deactivate for a conda environment) should at least get you out of the virtual environment.
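To check whether the shell is actually running inside a Docker container, one common (though not foolproof) heuristic is to look at PID 1's cgroup:

```shell
# prints a hint about whether this shell runs inside a container;
# /proc/1/cgroup mentions "docker" for most containerized processes
if grep -q docker /proc/1/cgroup 2>/dev/null; then
  echo "looks like a container"
else
  echo "looks like the host (or an unsupported setup)"
fi
```

Note that this is only a heuristic: cgroup v2 hosts and some runtimes may not include the string, so treat the result as a hint, not proof.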


whoami

should tell you who the shell is running under, and

pwd #print working directory

should tell you where you are

Once out

sudo docker ps

will give you a list of running containers and you can stop a container with

docker stop my_container

where my_container is the name or id of said container.


Ubuntu HowTo: E: Malformed entry 4 in list file /etc/apt/sources.list.d/additional-repositories.list (Component) E: The list of sources could not be read [duplicate]


I'm trying to install PuTTY and I got the following error:

E: Malformed entry 4 in list file /etc/apt/sources.list.d/additional-repositories.list (Component)
E: The list of sources could not be read.

After going through some posts on this forum, I opened the file using sudo nano /etc/apt/sources.list.d/additional-repositories.list. It has the following content:

deb [arch=amd64]    sylvia    stable
deb [arch=amd64]    sylvia    stable
deb [arch=amd64]    sylvia sudo add-ap$
deb [arch=amd64]
deb [arch=amd64] xenial stable

I created a space between ubuntu and stable in line 4, but I still got the same error. What is wrong with it?

P.S.: While getting my hands on Docker for the first time, I uninstalled some packages I don't need, so I would like to remove all Docker files and do a fresh install for Linux Mint.

You need a distribution keyword between the repository URL and stable, e.g.

deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable

if you’re on 20.04.
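A pragmatic way to get the clean slate mentioned in the P.S. might be the following (a sketch; it assumes the only thing wanted from that file is the Docker repository, and that your Mint release is based on Ubuntu xenial — substitute your actual base release):

```shell
# remove the malformed list, then re-add the Docker repo with the
# correct "deb [arch] URL distribution component" shape
sudo rm /etc/apt/sources.list.d/additional-repositories.list
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
```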


Server Bug Fix: How do I set locale when building an Ubuntu Docker image with Packer?


I’m using Packer to build a Docker image based on Ubuntu 14.04, i.e., in my Packer template I have:

"builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true
}]

and I build it using:

$ packer build my.json

What do I need to put in the template to get a specific locale (say en_GB) to be set when I subsequently run the following?

$ sudo docker run %IMAGE_ID% locale

Additional info

As it stands, I get an unconfigured locale, which causes a few problems for things I want to do next, like installing certain Python packages.

I’ve tried adding:

{
    "type": "shell",
    "inline": [
        "locale-gen en_GB.UTF-8",
        "update-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB.UTF-8 LC_ALL=en_GB.UTF-8"
    ]
}

but while that does set up the locale config it doesn’t affect the env used by docker run. Even if I add extra export lines like:

{
    "type": "shell",
    "inline": [
        "export LANG=en_GB.UTF-8"
    ]
}
they have no effect, presumably because when using docker run, it’s not a child process of the command packer build uses when running these commands initially.

As a workaround I can pass env vars to docker run, but don’t want to have to do that each time, e.g.:

sudo docker run -e LANG=en_GB.UTF-8 -e LANGUAGE=en_GB.UTF-8 -e LC_ALL=en_GB.UTF-8 %IMAGE_ID% locale

I haven’t tried it, but according to the documentation, you should be able to do this using the docker-import post-processor:


{
  "type": "docker-import",
  "repository": "local/ubuntu",
  "tag": "latest",
  "changes": [
    "ENV LC_ALL en_GB.UTF-8"
  ]
}
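Putting the pieces together, a full template sketch might look like the following (untested; the provisioner commands and repository name are assumptions based on the question above):

```json
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "commit": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "locale-gen en_GB.UTF-8",
      "update-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB.UTF-8 LC_ALL=en_GB.UTF-8"
    ]
  }],
  "post-processors": [{
    "type": "docker-import",
    "repository": "local/ubuntu",
    "tag": "latest",
    "changes": ["ENV LANG en_GB.UTF-8", "ENV LC_ALL en_GB.UTF-8"]
  }]
}
```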


Server Bug Fix: gitlab-runner process in container can’t find gitlab container when using docker-compose


I did not resolve my problem, but I know why it does not work. If you are using the docker executor, when you launch a job the gitlab-runner binary starts a special container. This container is the gitlab-runner-helper and manages git, caches, etc.
The container is started by calling the Docker Engine API running on the host (localhost, the physical machine). But since it is started "manually", it is not attached to any bridge network, or at least not to the docker-compose network. So the helper does not even know that the gitlab container exists.

The problem:

I just want to have gitlab and gitlab-runner (using docker executor) working on my localhost. I want to install and manage them with docker and docker-compose.

docker-compose.yml :

services:
  gitlab:
    container_name: my-container-gitlab
    image: gitlab/gitlab-ce:latest
    ports:
      - "443:443"
      - "9090:80"
      - "22:22"
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab'
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab

  gitlab-runner:
    container_name: my-container-gitlab-runner
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./gitlab-runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock

Gitlab works like a charm.
Runner registration also works:

docker-compose run gitlab-runner register -n \
    --url http://gitlab/ \
    --registration-token xxxxxxxxxx \
    --executor docker \
    --docker-image alpine \
    --description "My Docker Runner"

But when launching a job from Gitlab web UI, I get this :

Running with gitlab-runner 11.11.2 (ac2a293c)
      on My Docker Runner sBqMfFys
    Using Docker executor with image alpine ...
    Pulling docker image alpine ...
    Using docker image sha256:055936d3920576da37aa9bc460d70c5f212028bda1c08c0879aedf03d7a66ea1 for alpine ...
    Running on runner-sBqMfFys-project-1-concurrent-0 via 881cd3e0423c...
    Initialized empty Git repository in /builds/root/bertrand-malvaux/.git/
    Fetching changes...
    Created fresh repository.
    fatal: unable to access 'http://gitlab-ci-token:[MASKED]@gitlab/root/bertrand-malvaux.git/': 
Could not resolve host: gitlab
    ERROR: Job failed: exit code 1  

From what I have investigated so far:

As you can see in the error above, the runner binary does not "see" the gitlab hostname. I modified the image to check whether the gitlab container is reachable from inside the container itself (outside the runner binary, using only dumb-init), and the answer is yes.

Do you have any idea how to make this work?

So, since I have come across this issue myself, I thought I would reply here with my fix and an explanation:

My home lab has a git-lab Docker setup, and I use docker-compose to deploy both the gitlab and gitlab-runner servers. This creates a network that links the two together, allowing hostname resolution within the containers… but not from outside the created network.

Gitlab-runner, I have found, will by default start up a container when running a job and attach it to the "bridge" Docker network. If you use DockStation you can see this by checking the info of the created container before it stops, or by running docker inspect against it:
docker inspect --format='{{.NetworkSettings.Networks}}' <runner temp container_id>

I had another container running on bridge and tested whether ‘gitlab’ would resolve. It would not.

Soooooooo….. there is a setting you can pass in the gitlab-runner's config.toml, in the [runners.docker] section:

    network_mode = "<network name>"   <----- THIS

This setting will tell gitlab-runner which network to place the container in when it gets spun up, so if you know the name of the network your gitlab and gitlab-containers run on:

docker inspect --format='{{.NetworkSettings.Networks}}' <gitlab container id>

then add that as the network mode (my network name is git-lab_default):

    network_mode = "git-lab_default"

Then voila! Your test containers will be in the same network and everything is peachy.
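Putting it together, the relevant part of config.toml might look like this (a sketch; the token and names are placeholders taken from the question above):

```toml
[[runners]]
  name = "My Docker Runner"
  url = "http://gitlab/"
  token = "xxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    image = "alpine"
    network_mode = "git-lab_default"  # the docker-compose network
```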

Hope this helps anyone with the issue.


Server Bug Fix: Docker for private server


I want to set up a server (an Ubuntu V-Server) for a few people. I'm planning to run a few applications, e.g. GitLab, NextCloud, and Seafile.

What are the advantages of orchestrating these apps (and their dependencies like PostgreSQL) in containers via docker-compose, vs installing them directly on “bare metal” and configuring them by hand?
