<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Florian Lopes' blog]]></title><description><![CDATA[Discover posts about Java, Spring, and Docker.]]></description><link>https://blog.florianlopes.io/</link><image><url>https://blog.florianlopes.io/favicon.png</url><title>Florian Lopes&apos; blog</title><link>https://blog.florianlopes.io/</link></image><generator>Ghost 3.18</generator><lastBuildDate>Sun, 19 Apr 2026 13:23:48 GMT</lastBuildDate><atom:link href="https://blog.florianlopes.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Testing Ansible roles and playbooks with Molecule]]></title><description><![CDATA[How to test Ansible roles and playbooks with Molecule? Learn how Ansible Molecule works and discover the multiple layers of Ansible tests.]]></description><link>https://blog.florianlopes.io/testing-ansible-roles-and-playbooks-with-molecule/</link><guid isPermaLink="false">5ee71cb933b3510001c08fa7</guid><category><![CDATA[Docker]]></category><category><![CDATA[Ansible]]></category><category><![CDATA[Molecule]]></category><category><![CDATA[Tests]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Mon, 15 Jun 2020 05:56:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><em>Note: this tutorial has been written for Ansible Molecule 3 (<a href="https://github.com/ansible-community/molecule/releases" target="_blank">Releases · ansible-community/molecule · GitHub</a>)</em></p>
<p>To test your roles or playbooks, you could use VirtualBox or Hyper-V to build new VMs with fresh OS installs. However, this involves many steps (creating the VM, configuring SSH keys and the inventory, running the Ansible playbook, destroying the VM). The process can be shortened with Vagrant (to provision and destroy VM instances) or Docker, for faster feedback, but manually dealing with Vagrant and the underlying VMs remains cumbersome.</p>
<p>Molecule solves this problem by automating the process. It can be seen as an orchestrator: it takes charge of spinning up fresh instances and destroying them after the role/playbook has been executed. Depending on the chosen driver, Molecule provisions instances (<code>delegated</code>) or containers (<code>docker</code> and <code>podman</code>) to test against.</p>
<p>In this tutorial, I will use the <code>docker</code> driver as it's the default one and is often a good choice.</p>
<p>If you are interested in learning more about the <code>delegated</code> driver, here is a good write-up: <a href="https://medium.com/@fabio.marinetti81/validate-ansible-roles-through-molecule-delegated-driver-a2ea2ab395b5" target="_blank">https://medium.com/@fabio.marinetti81/validate-ansible-roles-through-molecule-delegated-driver-a2ea2ab395b5</a>.</p>
<h2 id="ansibletestinglevels">Ansible testing levels</h2>
<p>You have probably heard of the <a href="https://martinfowler.com/articles/practical-test-pyramid.html" target="_blank">test pyramid</a> in software development. The test pyramid defines three layers: unit tests, integration tests, and end-to-end tests. Infrastructure as Code (IaC) testing with Ansible tools involves the same concepts:</p>
<p><img src="https://blog.florianlopes.io/content/images/2020/06/Ansible-tests-pyramid.png" alt="Ansible tests pyramid.png"></p>
<p>There are multiple levels of testing with Ansible (from bottom to top):</p>
<ul>
<li>Unit tests:
<ul>
<li>Testing yaml structure: <code>yamllint</code></li>
<li>Testing Ansible playbook structure: <code>ansible-playbook --syntax-check</code></li>
<li>Check for bad practices: <code>ansible-lint</code></li>
</ul>
</li>
<li>Integration tests: <code>molecule test</code></li>
<li>End-to-end tests: testing the actual role or playbook against a production environment using Ansible's check mode (dry run): <code>ansible-playbook --check</code>. You can also use this mode to check that your role or playbook is idempotent</li>
</ul>
<p><em>Notes:</em></p>
<ul>
<li>As <code>ansible-playbook --syntax-check</code> is only a static check, more integration tests are needed to ensure that dynamic includes (<code>include_tasks</code>) work as expected</li>
<li>Ansible's dry-run mode will not make any changes on the target system; it only reports the changes that would have been made outside of check mode.</li>
<li>Idempotency is the ability to run a task multiple times with the same result (i.e., the task is not run again if the target is already in the desired state).</li>
</ul>
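<p>The unit-test layer is configured through small per-project files. As a minimal sketch, here is a possible <code>.yamllint</code> configuration (the rule values are illustrative, not recommendations):</p>
<pre><code class="language-yaml"># .yamllint - example configuration for the yamllint step
extends: default
rules:
  line-length:
    max: 120
  truthy:
    # Ansible files commonly use yes/no
    allowed-values: ['true', 'false', 'yes', 'no']
</code></pre>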
<h2 id="howitworks">How does it work?</h2>
<br>
<h3 id="testingsteps">Testing steps</h3>
<p>When running tests (<code>molecule test</code>), Molecule goes through a series of steps (the test matrix). Here is a summary:</p>
<ul>
<li><strong>dependency</strong>: collect required dependencies (roles, collections) using specified dependency manager (in <code>molecule.yml</code>), Galaxy is the default one</li>
<li><strong>lint</strong>: lint project using an external shell command (<code>ansible-lint</code> is recommended)</li>
<li><strong>cleanup</strong>: run a provided <code>cleanup.yml</code> playbook (specified in <code>molecule.yml</code>) to clean up test infrastructure set up in the <code>prepare</code> phase. This step is executed directly before every <code>destroy</code> step.</li>
<li><strong>destroy</strong>: destroy the target instance against which the playbook has been run</li>
<li><strong>create</strong>: create the target instance, using the defined <code>driver</code> in <code>molecule.yml</code></li>
<li><strong>prepare</strong>: prepare the instance: install any needed packages for the tested role/playbook</li>
<li><strong>converge</strong>: the actual test, import the role/playbook and run it</li>
<li><strong>verify</strong>: verify that the role/playbook has been correctly imported/executed, using the specified <code>verifier</code> in the <code>molecule.yml</code> file</li>
<li><strong>idempotence</strong>: run the <code>converge</code> phase again and ensure that the role/playbook is idempotent. Under the hood, Molecule re-runs the playbook used at the <code>converge</code> step and checks the <code>changed</code> boolean in the return values; a <code>true</code> value indicates that a task had to make changes, meaning idempotency is not guaranteed, and the step fails with an error.</li>
</ul>
<p>Additionally, each step can be run independently:</p>
<pre><code class="language-bash">molecule &lt;step&gt;
</code></pre>
<h2 id="installingmolecule">Installing Molecule</h2>
<p>Molecule is easy to install:</p>
<pre><code class="language-bash">pip3 install molecule
</code></pre>
<p><em><strong>Note: The Molecule team highly <a href="https://molecule.readthedocs.io/en/latest/installation.html#pip" target="_blank">recommends</a> installing it in a Python virtual environment using <a href="https://realpython.com/intro-to-pyenv/" target="_blank">pyenv</a>.</strong></em></p>
<p>If you are not very familiar with Python or don't have a working Python installation, I recommend running Molecule through Docker:</p>
<pre><code class="language-bash">docker run --rm -it quay.io/ansible/molecule:3.0.0 molecule --version
</code></pre>
<p>It's fairly easy and lets you get started without installing <code>Python</code>, <code>pip</code>, and their various dependencies.</p>
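<p>If you settle on the Docker approach, the long command line can be wrapped in a small shell function. This is just a convenience sketch, assuming a reachable Docker daemon and the image tag used throughout this tutorial:</p>
<pre><code class="language-bash"># Wrapper: forward any molecule command to the containerized Molecule.
molecule() {
  docker run --rm -it \
    -v "$(pwd)":/molecule/:ro \
    -v "$HOME"/.cache/:/root/.cache/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -w /molecule/ \
    quay.io/ansible/molecule:3.0.0 \
    molecule "$@"
}
</code></pre>
<p>Once defined (in your <code>.bashrc</code>, for example), <code>molecule --version</code> behaves as if Molecule were installed locally.</p>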
<h2 id="initializinganewrole">Initializing a new role</h2>
<p>Initializing a new role using Molecule is also easy:</p>
<pre><code class="language-bash">molecule init role
</code></pre>
<p>Using <code>Docker</code> (<code>Linux</code>):</p>
<pre><code class="language-bash">docker run --rm -it \
    -v &quot;$(pwd)&quot;:/molecule/:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -w /molecule/ \
    quay.io/ansible/molecule:3.0.0 \
    molecule init role role.name
</code></pre>
<p>On <code>Windows</code>:</p>
<pre><code class="language-bash">docker run --rm -it ^
-v &quot;%cd%&quot;:/molecule/:ro ^
-v /var/run/docker.sock:/var/run/docker.sock ^
-w /molecule/ ^
quay.io/ansible/molecule:3.0.0 ^
molecule init role role.name
</code></pre>
<p>Ansible Galaxy users will not be lost, as Molecule uses it to generate the role layout. If you are not familiar with Ansible Galaxy, you can review the directory structure <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-directory-structure" target="_blank">here</a>.</p>
<h3 id="moleculelayout">Molecule layout</h3>
<pre><code class="language-bash">molecule
└── default
    ├── converge.yml
    ├── Dockerfile.j2
    ├── INSTALL.rst
    ├── molecule.yml
    └── verify.yml
</code></pre>
<h4 id="themoleculeymlfile">The <code>molecule.yml</code> file</h4>
<p>The <code>molecule.yml</code> file is particularly important as it is used to configure Molecule.</p>
<pre><code class="language-yml">---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint
</code></pre>
<p>Here are the most important sections:</p>
<h5 id="verifier">verifier:</h5>
<p>As of Molecule 3.0, Ansible is the default <code>verifier</code>. It is simply an Ansible playbook in which you write specific state-checking tests against the target instance. Optionally, you can use <a href="https://molecule.readthedocs.io/en/latest/configuration.html#testinfra" target="_blank">Testinfra</a> if you are familiar with that tool. I prefer sticking with the <code>ansible</code> verifier, as I don't want to switch languages to write my tests. For the rest of this tutorial, I will assume the <code>verifier</code> is set to <code>ansible</code>.</p>
<h5 id="platforms">platforms:</h5>
<p>In this section, you can specify the Docker image used to create the target instance. You can also mount volumes or publish ports. If you want to test against multiple distributions (CentOS, Fedora, Debian), you can use an environment variable:</p>
<pre><code class="language-yaml">platforms:
  - name: instance
    image: ${DOCKER_IMAGE_DISTRIBUTION}
    command: ${DOCKER_IMAGE_COMMAND:-&quot;&quot;}
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    published_ports:
      - 0.0.0.0:80:8080
    privileged: true
    pre_build_image: true
</code></pre>
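<p>One convenient way to feed the <code>DOCKER_IMAGE_DISTRIBUTION</code> variable is a CI build matrix. Here is a sketch using GitHub Actions (the workflow layout, package list, and image names are illustrative and may need adjusting):</p>
<pre><code class="language-yaml">name: molecule
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro:
          - docker.io/pycontribs/centos:7
          - docker.io/pycontribs/debian:10
    steps:
      - uses: actions/checkout@v2
      - run: pip3 install molecule docker
      - name: Run the full test sequence against one distribution
        run: molecule test
        env:
          DOCKER_IMAGE_DISTRIBUTION: ${{ matrix.distro }}
</code></pre>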
<h5 id="lint">lint:</h5>
<p>This section is used to specify an external command that Molecule will use to handle project linting.</p>
<h4 id="theconvergeymlfile">The <code>converge.yml</code> file</h4>
<p>This is where you import your role. It is simply an Ansible playbook that Molecule runs right after the instance creation (setup). Typically, this file looks like this:</p>
<pre><code class="language-yaml">---
- name: Converge
  hosts: all
  become: true

  pre_tasks:
    - name: Ensure openssh-server is installed.
      package:
        name:
          - openssh-server
        state: present

  roles:
    - role: my-role
</code></pre>
<p>The <code>pre_tasks</code> section allows you to prepare the instance before importing your role.</p>
<p>This file can be run independently using this command:<br>
<code>molecule converge</code></p>
<h4 id="theverifyymlfile">The <code>verify.yml</code> file</h4>
<p>Finally, the <code>verify.yml</code> file contains Ansible instructions that verify your role has been correctly applied to the instance. This playbook is run immediately after the role import (the <code>converge.yml</code> file).</p>
<p>For instance, here is a <code>verify.yml</code> file used to check that <code>Nginx</code> is correctly serving web requests:</p>
<pre><code class="language-yaml">- name: Verify
  hosts: all
  
  tasks:
    - name: Verify Nginx is serving web requests
      uri:
        url: http://localhost/
        status_code: 200
</code></pre>
<p>Similarly to the <code>converge</code> step, the <code>verify</code> step will run the <code>verify.yml</code> playbook. This command can be used to run it without launching the entire Molecule sequence:<br>
<code>molecule verify</code></p>
<p>Molecule will run this playbook against the target instance, created earlier.</p>
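<p>Verification is not limited to HTTP checks: any Ansible module can be used in <code>verify.yml</code>. For example, here is a sketch asserting on a service state (the <code>nginx.service</code> name is illustrative):</p>
<pre><code class="language-yaml">- name: Verify
  hosts: all

  tasks:
    - name: Collect service states
      service_facts:

    - name: Assert Nginx is running
      assert:
        that:
          - ansible_facts.services['nginx.service'].state == 'running'
</code></pre>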
<h2 id="typicaltestworkflow">Typical test workflow</h2>
<p>As stated at the beginning of this tutorial, Molecule will go through a long series of steps (the <code>test matrix</code>), listed below by Molecule itself:</p>
<pre><code class="language-bash">--&gt; Test matrix
    
└── default
    ├── dependency
    ├── lint
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

</code></pre>
<p>This test matrix is similar to the <a href="https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html" target="_blank">Maven lifecycle</a>, for those who know Maven well (welcome, Java developers ;).</p>
<p>As the entire test matrix takes some time to complete, a typical workflow when developing a role or playbook could be:</p>
<ol>
<li><code>molecule create</code>: create the target instance.</li>
<li><code>molecule converge</code>: run the actual playbook against the created instance. Additionally, lint your files using <code>molecule lint</code>.</li>
<li><code>molecule verify</code>: ensure that the written tests are green.</li>
<li>Modify your playbook.</li>
<li>Run <code>molecule converge</code> / <code>molecule verify</code> again to test your modifications.</li>
<li>If you left the target instance in a broken state, destroy it with <code>molecule destroy</code>, then recreate it with <code>molecule create</code>.</li>
<li>Run <code>molecule converge</code> to test your changes on the fresh instance.</li>
<li>Finally, run the entire test cycle (<code>molecule test</code>) to ensure your role works correctly, in particular that it is idempotent.</li>
</ol>
<p><em>Note: another way to ensure that your playbook is idempotent is to run <code>molecule converge</code> twice. However, you would have to pay attention to the Ansible task return values and check for <code>changed</code>, as explained in the <a href="#testingsteps">testing steps</a> section.</em></p>
<h2 id="debuggingamoleculetest">Debugging a Molecule test</h2>
<p>When your tests are failing, it can be very useful to inspect the created instance to see what's happening inside.</p>
<h3 id="thedestroyneverflag">The <code>--destroy=never</code> flag</h3>
<p>The <code>--destroy=never</code> flag tells Molecule not to destroy the created instance after running the tests, allowing you to inspect it:</p>
<p><code>molecule test --destroy=never</code></p>
<p>Using <code>Docker</code> (<code>Linux</code>):</p>
<pre><code class="language-bash">docker run --rm -it \
	-v &quot;$(pwd)&quot;:/molecule/:ro \
	-v /var/run/docker.sock:/var/run/docker.sock \
	-v $HOME/.cache/:/root/.cache/ \
	-w /molecule/ \
	quay.io/ansible/molecule:3.0.0 \
	molecule test --destroy=never
</code></pre>
<p>On <code>Windows</code>:</p>
<pre><code class="language-bash">docker run --rm -it ^
-v &quot;%cd%&quot;:/molecule/:ro ^
-v %USERPROFILE%/.cache/:/root/.cache/ ^
-v /var/run/docker.sock:/var/run/docker.sock ^
-w /molecule/ ^
quay.io/ansible/molecule:3.0.0 ^
molecule test --destroy=never
</code></pre>
<h3 id="loginintotheinstance">Logging into the instance</h3>
<p>To inspect the instance state, simply issue this command:</p>
<pre><code class="language-bash">molecule login
</code></pre>
<p>This logs you directly into the Molecule instance your playbook is tested against.</p>
<p>Using <code>Docker</code> (<code>Linux</code>):</p>
<pre><code class="language-bash">docker run --rm -it \
    -v &quot;$(pwd)&quot;:/molecule/:ro \
    -v $HOME/.cache/:/root/.cache/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -w /molecule/ \
    quay.io/ansible/molecule:3.0.0 \
    molecule login
</code></pre>
<p>On <code>Windows</code>:</p>
<pre><code class="language-bash">docker run --rm -it ^ 
-v &quot;%cd%&quot;:/molecule/:ro ^
-v ~/.cache/:/root/.cache/ ^
-v /var/run/docker.sock:/var/run/docker.sock ^
-w /molecule/ ^
quay.io/ansible/molecule:3.0.0 ^
molecule login
</code></pre>
<p><em>Note: if you use Molecule through Docker, you won't be able to log into the target instance unless you mount the <code>$HOME/.cache</code> host directory into the container for each Molecule command involving interactions with the target instance. Otherwise, you will get:</em></p>
<pre><code class="language-bash">ERROR: Instances not created.  Please create instances first.
</code></pre>
<p>Indeed, Molecule stores its state in this directory (for example, the Docker container name of the created instance).</p>
<p>As the test instance is created using Docker (Docker driver), you can also issue a <code>docker ps</code> command and execute an interactive shell on the container:</p>
<pre><code class="language-bash">docker exec -it &lt;container_id&gt; sh
</code></pre>
<h2 id="testingansibleplaybooks">Testing Ansible playbooks</h2>
<p>As the <code>converge.yml</code> file is just an Ansible playbook that is run to execute tests, testing a playbook is quite similar to testing a role. First, issue this command to create a new scenario in your playbook's directory:</p>
<pre><code class="language-bash">molecule init scenario
</code></pre>
<p>Then, instead of including a role in the <code>converge.yml</code> file, simply import a playbook:</p>
<pre><code class="language-yaml">---
- name: Converge
  hosts: all
  become: true

  pre_tasks:
    - name: Ensure openssh-server is installed.
      package:
        name:
          - openssh-server
        state: present

- import_playbook: ../../playbook.yml

</code></pre>
<p>Run the <code>molecule converge</code> command to execute the playbook. You can also use a <code>verify.yml</code> file to ensure your playbook works <a href="#theverifyymlfile">as expected</a>.</p>
<p>Make sure the target hosts in your playbook match the ones defined in the <code>converge.yml</code> file.</p>
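<p>For instance, a <code>playbook.yml</code> targeting <code>all</code> matches the instance created by Molecule (the playbook content below is purely illustrative):</p>
<pre><code class="language-yaml">---
- name: My playbook
  hosts: all
  become: true

  tasks:
    - name: Ensure curl is installed
      package:
        name: curl
        state: present
</code></pre>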
<hr>
<h2 id="recommendedresources">Recommended resources</h2>
<h3 id="books">Books</h3>
<p>The must-read book on this topic is obviously <a href="https://amzn.to/2U0dqyL" target="_blank">Ansible for DevOps</a> by the famous Jeff Geerling, who is also the author of 99 Ansible <a href="https://galaxy.ansible.com/geerlingguy" target="_blank">roles</a>.</p>
<h3 id="courses">Courses</h3>
<p>For those who prefer watching screencasts over reading books, here is a great one on Pluralsight, authored by RedHat:<br>
<a href="https://app.pluralsight.com/library/courses/ansible-fundamentals/table-of-contents" target="_blank">https://app.pluralsight.com/library/courses/ansible-fundamentals/table-of-contents</a></p>
<p>For an in-depth tutorial, head over to this course:<br>
<a href="https://app.pluralsight.com/library/courses/getting-started-ansible/table-of-contents" target="_blank">https://app.pluralsight.com/library/courses/getting-started-ansible/table-of-contents</a></p>
<p>If you don't own a Pluralsight account yet, use this <a href="http://referral.pluralsight.com/mQgvS2w" target="_blank">link</a> to get 50% off your first month or 15% off an annual subscription.</p>
<h3 id="docs">Docs</h3>
<p>The <a href="https://molecule.readthedocs.io/en/latest/index.html" target="_blank">official documentation</a>.</p>
<a href="http://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=12728585" style="display:none;" target="_blank" rel="tag"><!--kg-card-end: markdown--></a>]]></content:encoded></item><item><title><![CDATA[Unit tests for Docker images]]></title><description><![CDATA[Unit testing structure and content of Docker images using Google's container-structure-test framework.]]></description><link>https://blog.florianlopes.io/unit-tests-for-docker-images/</link><guid isPermaLink="false">5eda297e26dda700015ae70f</guid><category><![CDATA[Docker]]></category><category><![CDATA[unit-tests]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Wed, 11 Jul 2018 18:51:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="unittestingdockerimageswithgooglecontainerstructuretests">Unit testing Docker images with Google Container Structure Tests</h1>
<br>
When engineering a new Docker image, it can be difficult to ensure the Dockerfile instructions are accurate and work as intended. Instead of discovering errors and bugs at runtime, writing tests helps you catch them during the development phase.
<p>To help maintainers write unit tests for their Docker images, Google's Container Tools team has released a framework that provides a simple way to test the structure and content of a Docker image. Mostly written in Go, it is pretty mature, as Google's teams have been using it internally for more than a year.</p>
<p><em>This tutorial has been written for the 1.3.0 version</em></p>
<h2 id="typesofunittestsfordockerimages">Types of unit tests for Docker images</h2>
<p>By providing four types of unit tests, this framework helps you ensure that the required content and commands are available at runtime when shipping the Docker image:</p>
<ul>
<li>command tests (run the specified command inside the container and verify the correct execution)</li>
<li>file existence tests (check the existence of a specified file inside the container)</li>
<li>file content tests (check the content of a specified file)</li>
<li>metadata tests (check the container configuration: environment variables, volumes, entrypoints, ports, etc.).</li>
</ul>
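<p>All four types live in a single configuration file. As a quick preview, here is a minimal sketch (every path and value below is illustrative; each type is detailed in the following sections):</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
commandTests:
- name: 'os release'
  command: 'cat'
  args: ['/etc/os-release']
  expectedOutput: ['Alpine']
fileExistenceTests:
- name: 'entrypoint present'
  path: '/entrypoint.sh'
  shouldExist: true
fileContentTests:
- name: 'entrypoint content'
  path: '/entrypoint.sh'
  expectedContents: ['apk']
metadataTest:
  exposedPorts: ['80']
</code></pre>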
<h3 id="runningtests">Running tests</h3>
<p>You can run the tests either with the binary (Linux) or with a Docker image (Windows).</p>
<p>The framework runs the tests specified in a given <code>.yaml</code> or <code>.json</code> file. The available test types are listed in the next <a href="#testingdockerfileinstructions">section</a>.</p>
<h4 id="usingthegooglecontainerstructuretestbinarylinux">Using the Google Container Structure test binary (Linux)</h4>
<p>Simply download the latest binary <a href="https://storage.googleapis.com/container-structure-test/latest/container-structure-test-linux-amd64" target="_blank">here</a>. The usage is straightforward:</p>
<pre><code class="language-sh">./container-structure-test-linux-amd64 test --image sample-docker-image --config sample_test_config.yaml
</code></pre>
<h4 id="usingadockerimage">Using a Docker image</h4>
<p>As the Google Container Structure Tests binary is only compatible with Linux, I have developed a Docker <a href="https://github.com/f-lopes/container-structure-test-docker" target="_blank">image</a> which makes it easy to run these tests in a Windows environment:</p>
<pre><code class="language-sh">docker run --rm -v &quot;&lt;path-to-tests-config-file&gt;:/test-config/tests_config.yaml&quot; \
  -v /var/run/docker.sock:/var/run/docker.sock flopes/container-structure-test-docker &quot;test --image &lt;image-to-test&gt; --config tests_config.yaml&quot;
</code></pre>
<h3 id="testingdockerfileinstructions">Testing Dockerfile instructions</h3>
<p>As mentioned before, the test configurations listed below must be placed in a <code>.yaml</code> or <code>.json</code> file.</p>
<h4 id="copyadd">COPY, ADD</h4>
<pre><code class="language-Dockerfile">FROM alpine:3.7
ADD entrypoint.sh /entrypoint.sh
</code></pre>
<p>To test the Docker <code>ADD</code>/<code>COPY</code> instruction, you can use the following test configuration:</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
fileExistenceTests:
- name: 'entrypoint'
  path: '/entrypoint.sh'
  shouldExist: true
  permissions: '-rwxr-xr-x'
</code></pre>
<p>This section ensures that the specified file exists at the given location (<code>path</code>) with the correct permissions. Note that it is also possible to test for the nonexistence of a file.</p>
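<p>For example, here is a sketch asserting that a file is absent (the path is illustrative):</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
fileExistenceTests:
- name: 'no leftover apk cache'
  path: '/var/cache/apk/APKINDEX.tar.gz'
  shouldExist: false
</code></pre>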
<p>To enhance this test, you can also add a <code>fileContentTests</code> section:</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
fileContentTests:
- name: 'entrypoint'
  path: '/entrypoint.sh'
  expectedContents: ['echo']
</code></pre>
<p>Note that you can also use the <code>excludedContents</code> field to ensure that the specified file does NOT contain the given content.</p>
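<p>A sketch of <code>excludedContents</code> in action (the pattern is illustrative):</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
fileContentTests:
- name: 'no hardcoded secrets in entrypoint'
  path: '/entrypoint.sh'
  excludedContents: ['PASSWORD']
</code></pre>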
<h4 id="run">RUN</h4>
<p>To test the <code>RUN</code> instruction, you can use either a <code>fileExistenceTests</code> or a <code>fileContentTests</code> section if you download or create a file with this instruction.<br>
However, when using the <code>RUN</code> instruction to install packages or binaries, the <code>commandTests</code> section is more relevant:</p>
<pre><code class="language-Dockerfile">FROM alpine:3.7

RUN apk add --update curl
</code></pre>
<p>The corresponding test in the <code>.yaml</code> file:</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
commandTests:
  - name: &quot;curl package installation&quot;
    command: &quot;which&quot;
    args: [&quot;curl&quot;]
    expectedOutput: [&quot;/usr/bin/curl&quot;]
</code></pre>
<h3 id="testingimagemetadatas">Testing image metadata</h3>
<p>The <code>metadataTest</code> section checks the following Dockerfile instructions:</p>
<ul>
<li><code>ENV</code></li>
<li><code>LABEL</code></li>
<li><code>ENTRYPOINT</code></li>
<li><code>CMD</code></li>
<li><code>EXPOSE</code></li>
<li><code>VOLUME</code></li>
<li><code>WORKDIR</code></li>
</ul>
<p>To show its usage, let's test the following Dockerfile:</p>
<pre><code class="language-Dockerfile">FROM alpine:3.7

LABEL MAINTAINER=&quot;Florian Lopes&quot;

ENV PROFILE DEV

ADD entrypoint.sh /entrypoint.sh

VOLUME /volume

WORKDIR /

EXPOSE 80

ENTRYPOINT [&quot;/entrypoint.sh&quot;]

CMD [&quot;--help&quot;]
</code></pre>
<pre><code class="language-sh">docker build -t metadata .
</code></pre>
<p>The test configuration:</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
metadataTest:
  env:
  - key: 'PROFILE'
    value: 'DEV'
  labels:
  - key: 'MAINTAINER'
    value: 'Florian Lopes'
  volumes: ['/volume']
  workdir: ['/']
  exposedPorts: ['80']
  entrypoint: ['/entrypoint.sh']
  cmd: ['--help']
</code></pre>
<p>Run the tests:</p>
<pre><code class="language-sh">./container-structure-test-linux-amd64 test --image metadata --config tests_config.yml
</code></pre>
<pre><code class="language-sh">=========================================
====== Test file: tests_config.yml ======
=========================================

=== RUN: Metadata Test
--- PASS

=========================================
================ RESULTS ================
=========================================
Passes:      1
Failures:    0
Total tests: 1

PASS
</code></pre>
<h3 id="advancedusage">Advanced usage</h3>
<h4 id="setupteardowncommands">Setup/teardown commands</h4>
<h5 id="setupcommand">Setup command</h5>
<p>Sometimes, an image needs an <code>ENTRYPOINT</code> instruction in order to properly initialize the container. As the Google Container Structure Tests framework works by overriding container entrypoint (see <a href="https://github.com/GoogleContainerTools/container-structure-test/#image-entrypoint" target="_blank">here</a>), the defined <code>ENTRYPOINT</code> in the Dockerfile will not be honored.<br>
To overcome this limitation, you can use the <code>setup</code> field to run an <code>entrypoint</code> script.</p>
<p>For example, let's say we want to install the <code>curl</code> package at the container startup:</p>
<pre><code class="language-Dockerfile">FROM alpine:3.7

COPY entrypoint.sh /entrypoint.sh

RUN chmod +x /entrypoint.sh

ENTRYPOINT [&quot;/entrypoint.sh&quot;]
</code></pre>
<pre><code class="language-sh">#!/bin/sh
apk add --update curl
</code></pre>
<p>To ensure the <code>/entrypoint.sh</code> script is executed, we provide the <code>setup</code> field with the <code>/entrypoint.sh</code> script.</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
commandTests:
  - name: &quot;curl package installation&quot;
    setup: [[&quot;/entrypoint.sh&quot;]]
    command: &quot;which&quot;
    args: [&quot;curl&quot;]
    expectedOutput: [&quot;/usr/bin/curl&quot;]
</code></pre>
<h5 id="teardowncommand">Teardown command</h5>
<p>Like the <code>setup</code> field, the <code>teardown</code> field can be used to execute commands after the actual test command.</p>
<pre><code class="language-yaml">schemaVersion: '2.0.0'
commandTests:
  - name: &quot;curl package installation&quot;
    teardown: [[&quot;/entrypoint.sh&quot;]]
    command: &quot;which&quot;
    args: [&quot;curl&quot;]
    expectedOutput: [&quot;/usr/bin/curl&quot;]
</code></pre>
<h4 id="samples">Samples</h4>
<p>You can see a full sample for the spring-boot-docker <a href="https://github.com/f-lopes/spring-boot-docker/tree/master/test" target="_blank">image</a> or the container-structure-test Docker image <a href="https://github.com/f-lopes/container-structure-test-docker/tree/master/test" target="_blank">itself</a>.</p>
<h2 id="automatingdockerimagestests">Automating Docker images tests</h2>
<p>Automating Docker image tests is easy: simply provide your CI environment with the container-structure-test binary or the Docker image.</p>
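<p>As a sketch, a <code>.travis.yml</code> could look like this (the file contents are illustrative; the binary URL is the one given earlier in this post):</p>
<pre><code class="language-yaml">language: bash
services:
  - docker
before_script:
  - curl -LO https://storage.googleapis.com/container-structure-test/latest/container-structure-test-linux-amd64
  - chmod +x container-structure-test-linux-amd64
script:
  - docker build -t my-image .
  - ./container-structure-test-linux-amd64 test --image my-image --config tests_config.yaml
</code></pre>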
<p>An example demonstrating Travis CI integration is available <a href="https://github.com/f-lopes/spring-boot-docker/tree/master/test" target="_blank">here</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[From WordPress to Ghost using Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="migratingwordpresstoghostusingdocker">Migrating Wordpress to Ghost using Docker</h1>
<p><img src="https://blog.florianlopes.io/content/images/2017/07/wordpress-to-ghost-1.jpg" alt="ghost-logo-big.png"></p>
<p><em>Note: This post has been written for WordPress v. 4.3.9 and Ghost v. 0.11.7</em></p>
<p>For the last year, my blog was hosted in a WordPress Docker container. I've now migrated my blog to <a href="https://ghost.org/" target="_blank">Ghost</a>, still using Docker.</p>
<p>This post explains</p>]]></description><link>https://blog.florianlopes.io/docker-move-from-wordpress-to-ghost/</link><guid isPermaLink="false">5eda297e26dda700015ae70e</guid><category><![CDATA[Docker]]></category><category><![CDATA[WordPress]]></category><category><![CDATA[Ghost]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Sat, 15 Jul 2017 07:18:00 GMT</pubDate><media:content url="https://blog.florianlopes.io/content/images/2017/07/wordpress-to-ghost-1-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="migratingwordpresstoghostusingdocker">Migrating Wordpress to Ghost using Docker</h1>
<p><img src="https://blog.florianlopes.io/content/images/2017/07/wordpress-to-ghost-1.jpg" alt="From WordPress to Ghost using Docker"></p>
<p><em>Note: This post has been written for WordPress v. 4.3.9 and Ghost v. 0.11.7</em></p>
<p>For the last year, my blog was hosted in a WordPress Docker container. I've now migrated my blog to <a href="https://ghost.org/" target="_blank">Ghost</a>, still using Docker.</p>
<p>This post explains the reasons that got me to move to Ghost and how I did it.</p>
<h2 id="whydidimovetoghost">Why did I move to Ghost?</h2>
<h3 id="acomplexwritingworkflow">A complex writing workflow</h3>
<p>Writing my posts in plain text and then formatting them (using the integrated WYSIWYG/HTML editor) was really painful.</p>
<p>My workflow was the following:</p>
<ol>
<li>Writing draft in Evernote</li>
<li>Formatting post in WordPress WYSIWYG editor</li>
<li>Formatting code using a plugin</li>
<li>Fixing it using HTML editor</li>
</ol>
<p>I really wanted to keep a version of my posts in Evernote or another note-taking tool. Since Evernote supported neither HTML nor Markdown syntax, I switched to <a href="https://www.inkdrop.app/?r=Ysb2B8WUq" target="_blank">Inkdrop</a>. This combination of tools really simplifies my workflow:</p>
<ol>
<li>Writing &amp; formatting my post using Markdown (code included)</li>
<li>Importing my post to Ghost</li>
</ol>
<h3 id="myexperiencewithwordpress">My experience with WordPress</h3>
<p>During this year, I encountered these very common problems with <strong>WordPress</strong>:</p>
<ul>
<li>I broke my installation (white screen) a few times after installing a new plugin or updating WordPress; I was able to recover my blog thanks to UpdraftPlus</li>
<li>Performance: with all the plugins I had installed, the site was slow and pretty heavy even though it didn't contain many posts</li>
<li>No Markdown support: I tried many plugins (which convert Markdown to HTML), but none fully satisfied me</li>
</ul>
<p>On the other hand, <strong>Ghost</strong> offers the following advantages for me:</p>
<ul>
<li><strong>Built-in Markdown support!</strong></li>
<li>Serves content very fast</li>
<li>SEO support without plugin</li>
<li>A very light and simple blogging system</li>
<li>Written in Javascript (<a href="https://nodejs.org/" target="_blank">NodeJS</a>), a well known technology for most developers</li>
<li>Integrated email service (inside the Docker image)</li>
</ul>
<h2 id="themigrationfromwordpresstoghost">The migration from WordPress to Ghost</h2>
<h3 id="exportwordpressdata">Export WordPress data</h3>
<h4 id="exportposts">Export posts</h4>
<p>This part is the easiest as the Ghost team has developed a plugin to simplify the migration from WordPress.</p>
<ol>
<li>
<p>Install and activate the WordPress &quot;Ghost&quot; plugin (<a href="https://wordpress.org/plugins/ghost/" target="_blank">https://wordpress.org/plugins/ghost/</a>).</p>
</li>
<li>
<p>Export your data by downloading the Ghost file (.json):<br>
<img src="https://blog.florianlopes.io/content/images/2017/07/wordpress-export-posts.png" alt="From WordPress to Ghost using Docker"></p>
</li>
</ol>
<h4 id="exportimages">Export images</h4>
<p>WordPress stores its images in the <code>wp-content/uploads</code> directory. Since the WordPress instance is running within a Docker container, we have to find out where this directory is located. If your container's <code>wp-content</code> directory is bound to a directory on your host, simply go to that folder and back up its content.</p>
<p>If not, you can find its location using the Docker CLI:</p>
<pre><code class="language-shell">docker inspect --format='{{range .Mounts}}{{.Source}}{{end}}' 02cd84ed3689
</code></pre>
<p>Here, <code>02cd84ed3689</code> is your WordPress container ID.</p>
<p>This command should return the location where the WordPress container is storing its data.</p>
<p>Go to the specified directory and save its content wherever you want.</p>
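<p>If you prefer a single archive, this can be done with <code>tar</code>. The sketch below creates a stand-in uploads directory so it is self-contained; in practice, point <code>uploads_dir</code> at the path returned by <code>docker inspect</code>:</p>

```shell
# Stand-in for the directory located via `docker inspect` (example content only)
uploads_dir=./wp-content/uploads
mkdir -p "$uploads_dir/2017/07"
touch "$uploads_dir/2017/07/pic.jpg"

# Archive the whole uploads directory for the later import into Ghost
tar -czf wordpress-uploads-backup.tar.gz -C "$uploads_dir" .
tar -tzf wordpress-uploads-backup.tar.gz
```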
<h4 id="exportcomments">Export comments</h4>
<p>Unfortunately, the only way to export comments to Ghost is using Disqus.</p>
<p>Install the Disqus plugin and go to Comments -&gt; Disqus in the left-side admin panel.</p>
<p>Click on Plugin Settings, then export your comments as shown below:<br>
<img src="https://blog.florianlopes.io/content/images/2017/07/wordpress-export-comments.png" alt="From WordPress to Ghost using Docker"></p>
<p>Your blog comments should now have been exported to Disqus.</p>
<h3 id="importwordpressdataintoghost">Import Wordpress data into Ghost</h3>
<p>From now on, I assume you already have an instance of Ghost running on your server. If it isn't the case, you can get started using this simple <code>docker-compose</code> file:</p>
<pre><code>version: '2'
services:
  blog:
    image: ghost:0.11.7
    container_name: ghost
    expose:
     - &quot;2368&quot;
    volumes:
     - &quot;/where/you/want/to/store/ghost/content:/var/lib/ghost&quot;
    restart: always
    environment:
     - VIRTUAL_HOST=blog.yourdomain.com,www.blog.yourdomain.com
     - NODE_ENV=production
     - PUBLIC_URL=https://blog.yourdomain.com
</code></pre>
<p><em>Note that this container doesn't listen to port 80 as I'm using an nginx-proxy container (<a href="https://github.com/jwilder/nginx-proxy" target="_blank">https://github.com/jwilder/nginx-proxy</a>).</em></p>
<h4 id="importposts">Import posts</h4>
<p>Before importing posts, you need to update the image links, because WordPress stores images in the <code>wp-content/uploads</code> directory whereas Ghost stores them in <code>content/images</code>.</p>
<p>To make the links compatible with Ghost, we can use this script:</p>
<pre><code class="language-shell">sed -i.original -e 's|wp-content/uploads|content/images|g' your-export.json
</code></pre>
<p>This command creates a backup file and replaces <code>wp-content/uploads</code> with <code>content/images</code> in the JSON file.</p>
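<p>As a quick sanity check, you can try the substitution on a sample JSON fragment (the image path below is made up):</p>

```shell
# Run the same substitution on a one-line sample
sample='{"image":"/wp-content/uploads/2017/07/pic.jpg"}'
echo "$sample" | sed -e 's|wp-content/uploads|content/images|g'
# → {"image":"/content/images/2017/07/pic.jpg"}
```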
<p>The JSON file is now ready to be imported. This is the easiest part, as Ghost has a built-in function to import JSON data.<br>
To do so, go to your blog's admin panel, click on Labs and select the JSON file the WordPress Ghost plugin generated.</p>
<p><img src="https://blog.florianlopes.io/content/images/2017/07/Import-data-to-ghost.png" alt="From WordPress to Ghost using Docker"></p>
<h4 id="importimages">Import images</h4>
<p>Ghost stores its images in the <code>/var/lib/ghost/images</code> directory inside the container. Find out where this directory is bound on the host and copy the exported files to that location.</p>
<p>You can find the container mounts using the Docker CLI:</p>
<pre><code class="language-shell">docker inspect --format='{{json .Mounts}}' 3053659ae689
</code></pre>
<p>Where <code>3053659ae689</code> is your Ghost container ID.</p>
<p>Find the mount whose <code>Destination</code> is <code>/var/lib/ghost</code>. The <code>images</code> subdirectory under that mount's <code>Source</code> path is where you have to copy the WordPress media you exported.</p>
<h4 id="importcomments">Import comments</h4>
<p>As your comments are now kept safe in Disqus, the import process consists of adding a simple JavaScript snippet that retrieves them.</p>
<h5 id="adddisqusplugintoghost">Add Disqus plugin to Ghost</h5>
<p>Depending on the active theme of your Ghost installation, a Disqus module may already be present.<br>
If not, you can set up Disqus using this guide: <a href="https://disqus.com/profile/signup/intent/" target="_blank">https://disqus.com/profile/signup/intent/</a>.</p>
<p>If you are using the Ghostium theme <a href="https://github.com/oswaldoacauan/ghostium" target="_blank">https://github.com/oswaldoacauan/ghostium</a>, the Disqus module is already installed, all you have to do is to put your Disqus shortname in the <code>src/partials/custom/config.hbs</code> file.</p>
<hr>
<h2 id="handlingseo">Handling SEO</h2>
<p>When changing your CMS, you should also think about the impact it will have on SEO. Leaving 404 errors can negatively impact your search ranking.</p>
<h3 id="fromwordpresscategoriestoghosttags">From WordPress categories to Ghost tags</h3>
<p>Ghost doesn't make use of categories. Instead, Ghost relies on tags, which are essentially the same concept. In our use case, if a user tries to access an old page such as <code>/category/java</code>, the Ghost server will return a 404 by default. It would be much better to tell the browser (and hence the user) that this content has moved.</p>
<p>The best way to tell search engines or users that your content has moved is to return a 301 HTTP code (permanent redirect).</p>
<h4 id="usingnginxtoredirectcategoriestotags">Using Nginx to redirect categories to tags</h4>
<p>I found that the best method to set up a permanent redirect was to configure Nginx.</p>
<p>I've set up the following redirections within Nginx:</p>
<pre><code class="language-shell">rewrite ^\/category\/(.*) https://blog.florianlopes.io/tag/$1/ permanent;
</code></pre>
<p>This line tells Nginx to respond with a permanent redirect to <code>/tag/*</code> when a <code>/category/*</code> URL is requested.</p>
<p><strong>/sitemap_index.xml -&gt; sitemap.xml</strong></p>
<pre><code class="language-shell">rewrite ^\/sitemap_index.xml https://blog.florianlopes.io/sitemap.xml permanent;
</code></pre>
<p><strong>/post_tag-sitemap.xml -&gt; sitemap-tags.xml</strong></p>
<pre><code class="language-shell">rewrite ^\/post_tag-sitemap.xml https://blog.florianlopes.io/sitemap-tags.xml permanent;
</code></pre>
<h3 id="customredirectionwithjwildernginxproxy">Custom redirection with jwilder/nginx-proxy</h3>
<p>If you are using nginx-proxy from Jason Wilder, simply put a file named <code>${VIRTUAL_HOST}_location</code> containing these directives:</p>
<pre><code class="language-shell">rewrite ^\/category\/(.*) https://blog.florianlopes.io/tag/$1/ permanent;
rewrite ^\/sitemap_index.xml https://blog.florianlopes.io/sitemap.xml permanent;
rewrite ^\/post_tag-sitemap.xml https://blog.florianlopes.io/sitemap-tags.xml permanent;
</code></pre>
<p>Assuming your virtual host is named <code>yourblog.com</code>, put these directives in a <code>yourblog.com_location</code> file.</p>
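<p>A minimal sketch creating that file from the shell (the directory and virtual host name are examples; the directives are the ones listed above):</p>

```shell
# Example vhost.d directory; this is what gets mounted into nginx-proxy
vhost_dir=./vhost.d
mkdir -p "$vhost_dir"
# Single quotes keep $1 literal so Nginx receives it unexpanded
printf '%s\n' \
  'rewrite ^\/category\/(.*) https://blog.florianlopes.io/tag/$1/ permanent;' \
  'rewrite ^\/sitemap_index.xml https://blog.florianlopes.io/sitemap.xml permanent;' \
  'rewrite ^\/post_tag-sitemap.xml https://blog.florianlopes.io/sitemap-tags.xml permanent;' \
  > "$vhost_dir/yourblog.com_location"
```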
<p>Last step, share this file with the nginx-proxy Docker container:</p>
<pre><code class="language-shell">docker run -d -p 80:80 -p 443:443 -v /path/to/vhost:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
</code></pre>
<h2 id="someresourcesforghost">Some resources for Ghost</h2>
<p><a href="https://www.ghostforbeginners.com/" target="_blank">https://www.ghostforbeginners.com/</a><br>
<a href="https://help.ghost.org/hc/en-us/articles/225093168-Migrating-From-WordPress-to-Ghost" target="_blank">https://help.ghost.org/hc/en-us/articles/225093168-Migrating-From-WordPress-to-Ghost</a><br>
My <a href="https://www.inkdrop.app/?r=Ysb2B8WUq" target="_blank">note-taking</a> app.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[A tool for Spring MockMvcRequestBuilder to post form objects easily]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="testingspringmvcformvalidationsusingatoolformockmvcrequestbuilder">Testing Spring MVC form validations using a tool for MockMvcRequestBuilder</h1>
<p>Spring MVC Test Framework is great to test controllers without even running a Servlet container. However, it’s not always straightforward when dealing with form validation, especially when your forms have a lot of properties. This post will show you</p>]]></description><link>https://blog.florianlopes.io/tool-for-spring-mockmvcrequestbuilder-forms-tests/</link><guid isPermaLink="false">5eda297e26dda700015ae70d</guid><category><![CDATA[java]]></category><category><![CDATA[Spring MVC]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Tue, 25 Oct 2016 07:00:25 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="testingspringmvcformvalidationsusingatoolformockmvcrequestbuilder">Testing Spring MVC form validations using a tool for MockMvcRequestBuilder</h1>
<p>Spring MVC Test Framework is great to test controllers without even running a Servlet container. However, it’s not always straightforward when dealing with form validation, especially when your forms have a lot of properties. This post will show you how to test your form validations easily using a tool that allows Spring's <code>MockMvcRequestBuilder</code> to post an entire form object to a controller.</p>
<h2 id="validatingformswithvalidannotation">Validating forms with @Valid annotation</h2>
<p><em>A quick recap about JSR-303 support in Spring MVC.</em></p>
<p>The <code>@Valid</code> annotation tells Spring MVC to trigger validation on the annotated bean once a request is made:</p>
<pre><code class="language-java">    @PostMapping(&quot;/add&quot;)
    public String addUser(@Valid AddUserForm addUserForm, BindingResult bindingResult, RedirectAttributes redirectAttributes) {
        if (bindingResult.hasErrors()) {
            return ADD_USER_VIEW;
        } else { // Save new user 
            redirectAttributes.addFlashAttribute(&quot;flash&quot;, &quot;User added&quot;);
            return &quot;redirect:&quot; + ADD_USER_URL;
        }
    }
</code></pre>
<pre><code class="language-java">public class AddUserForm {
    @NotNull
    @Size(min = 1)
private List&lt;String&gt; firstNames;
    @NotNull
    @Size(min = 3)
    private String name;
    @NotNull
    private LocalDate birthDate;
    @NotNull
    @Valid
    private Address address;
    @NotNull
    @Size(min = 1)
    private String[] hobbies;
    @NotNull
    private Gender gender;
}
</code></pre>
<h2 id="submittingformobjectswithmockmvc">Submitting form objects with MockMvc</h2>
<p>Although <code>MockMvc</code> is a useful tool, it is not so convenient when dealing with large form submissions. To test a form containing a lot of fields, you have to map each one to an HTTP parameter like this:</p>
<pre><code class="language-java">this.mockMvc.perform(MockMvcRequestBuilders.post(&quot;url&quot;)
    .param(&quot;field&quot;, &quot;fieldValue&quot;)
    .param(&quot;field2.nestedField&quot;, &quot;nestedFieldValue&quot;);
</code></pre>
<p>This method works if the form doesn’t contain too many fields (or nested ones!). However, it is error-prone (wrong field name, missing field, etc.) and becomes repetitive if you have multiple form validations to test.</p>
<p>This is why I built a tool for <code>MockMVCRequestBuilder</code> (<a href="https://github.com/f-lopes/spring-mvc-test-utils" target="_blank">https://github.com/f-lopes/spring-mvc-test-utils</a>).</p>
<p><em>Note: A better approach would be to reduce the number of fields in your form. If you can’t for some reason, you can still use this tool.</em></p>
<h2 id="sendingformobjectsusingacustommockmvcrequestbuilder">Sending form objects using a custom MockMvcRequestBuilder</h2>
<p>This tool allows sending an entire form object using <code>MockMvcRequestBuilder</code>.</p>
<p>The usage is straightforward:</p>
<pre><code class="language-java">final AddUserForm addUserForm = new AddUserForm(Arrays.asList(&quot;John&quot;, &quot;Jack&quot;), &quot;Doe&quot;,
        LocalDate.now(), new Address(1, &quot;Amber&quot;, &quot;New York&quot;));
this.mockMvc.perform(MockMvcRequestBuilderUtils.postForm(&quot;/users/add&quot;, addUserForm))
	.andExpect(MockMvcResultMatchers.model().attributeErrorCount(&quot;addUserForm&quot;, 1));
</code></pre>
<p>This builder finds the form's fields using reflection and passes them to the request as HTTP parameters:</p>
<pre><code>firstNames[0]=John 
firstNames[1]=Jack 
lastName=Doe 
address.streetNumber=1 
address.street=Amber 
address.city=New York
</code></pre>
<p>It also supports property editors to format the fields the way you want:</p>
<pre><code class="language-java">MockMvcRequestBuilderUtils.registerPropertyEditor(LocalDate.class, new CustomLocalDatePropertyEditor(&quot;dd/MM/yyyy&quot;));
</code></pre>
<h2 id="getthetool">Get the tool</h2>
<p>Add these lines to your <code>pom.xml</code>:</p>
<pre><code class="language-xml">&lt;dependency&gt;
    &lt;groupId&gt;io.florianlopes&lt;/groupId&gt;
    &lt;artifactId&gt;spring-mvc-test-utils&lt;/artifactId&gt;
    &lt;version&gt;2.2.1&lt;/version&gt;
&lt;/dependency&gt;
</code></pre>
<p>See the documentation for more info: <a href="https://github.com/f-lopes/spring-mvc-test-utils" target="_blank">https://github.com/f-lopes/spring-mvc-test-utils</a>.<br>
The example code for this post is available here: <a href="https://github.com/f-lopes/spring-mvc-form-validation-tests" target="_blank">https://github.com/f-lopes/spring-mvc-form-validation-tests</a>.</p>
<a href="http://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=12728585" style="display:none;" target="_blank" rel="tag">
<!--kg-card-end: markdown--></a>]]></content:encoded></item><item><title><![CDATA[5 tips to reduce Docker image size]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="5tipstoreducedockerimagesize">5 tips to reduce Docker image size</h1>
<p><a href="https://blog.florianlopes.io/content/images/2016/04/Docker-layers.png"><img src="https://blog.florianlopes.io/content/images/2016/04/Docker-layers.png" alt="Docker layers"></a></p>
<p><em>Docker images can quickly weight 1 or more GB.  Although the gigabyte price is decreasing, keeping your Docker images light will bring some benefits. This post will give you 5 tips to help reduce your Docker images size and why focusing on it</em></p>]]></description><link>https://blog.florianlopes.io/5-tips-to-reduce-docker-image-size/</link><guid isPermaLink="false">5eda297e26dda700015ae70c</guid><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Wed, 07 Sep 2016 05:12:01 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="5tipstoreducedockerimagesize">5 tips to reduce Docker image size</h1>
<p><a href="https://blog.florianlopes.io/content/images/2016/04/Docker-layers.png"><img src="https://blog.florianlopes.io/content/images/2016/04/Docker-layers.png" alt="Docker layers"></a></p>
<p><em>Docker images can quickly weigh 1 GB or more. Although the price per gigabyte keeps decreasing, keeping your Docker images light brings some benefits. This post will give you 5 tips to help reduce your Docker image size and explain why focusing on it is important.</em></p>
<p><em><strong>Update:</strong> Docker 1.13 introduced a new –squash option to squash the image layers (experimental): <a href="https://docs.docker.com/engine/reference/commandline/build/#/squash-an-images-layers---squash-experimental-only" target="_blank">https://docs.docker.com/engine/reference/commandline/build/#/squash-an-images-layers—squash-experimental-only</a> (thanks <a href="https://twitter.com/SISheogorath" target="_blank">@SISheogorath</a>).</em></p>
<h2 id="whyistheimagesizesoimportant">Why is the image size so important?</h2>
<p>Reducing the final Docker image size leads to:</p>
<ul>
<li>Reduced build time</li>
<li>Reduced disk usage</li>
<li>Reduced download time</li>
<li>Better security due to smaller footprint</li>
<li>Faster deployments</li>
</ul>
<h2 id="whatisalayer">What is a layer?</h2>
<p>To reduce an image size, it’s important to understand what a layer is.<br>
Every Docker image is composed of multiple intermediate images (layers) which form the final image. This stack of layers allows Docker to reuse layers when an identical instruction is found.</p>
<p>Each Dockerfile instruction creates a layer at build time:</p>
<pre><code class="language-Dockerfile">FROM ubuntu                  # This base image is already composed of X layers (4 at the time of writing)
MAINTAINER Florian Lopes     # One layer
RUN mkdir -p /some/dir       # One layer
RUN apt-get install -y curl  # One layer
</code></pre>
<p><img src="https://blog.florianlopes.io/content/images/2016/04/Docker-layers.png" alt="Docker layers - 5 tips to reduce Docker image size"></p>
<p>Let’s build this image:</p>
<pre><code class="language-shell">$ docker build -t curl .
[...]

$ docker images curl
REPOSITORY            TAG            IMAGE ID            CREATED            VIRTUAL SIZE
curl                  latest         732afd2af5a9        About an hour ago  199.3 MB
</code></pre>
<p>To see the intermediate layers of an image, type the following command:</p>
<pre><code class="language-shell">$ docker history curl
IMAGE               CREATED             CREATED BY                                      SIZE
732afd2af5a9        About an hour ago   /bin/sh -c apt-get install -y curl              11.32 MB
912b76f3dd8e        About an hour ago   /bin/sh -c mkdir -p /some/dir                   0 B
525804109d88        About an hour ago   /bin/sh -c #(nop) MAINTAINER Florian Lopes      0 B
c88b54fedc4f        9 days ago          /bin/sh -c #(nop) CMD [&quot;/bin/bash&quot;]             0 B
44802199e669        9 days ago          /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB
74a2c71e6050        9 days ago          /bin/sh -c set -xe                                                  &amp;&amp; echo '#!/bin/sh' &gt; /u   194.5 kB
140d9fb3c81c        9 days ago          /bin/sh -c #(nop) ADD file:ed7184ebed5263e677   187.8 MB
</code></pre>
<p>You can see above that each layer has a size and the command that created it. The final image built from this Dockerfile contains 3 layers plus all the Ubuntu image layers.</p>
<p>Although this can be somewhat difficult to understand, this structure is very important as it allows Docker to cache layers and make builds much faster. When building an image, the Docker daemon checks whether the intermediate image (the layer created by the instruction) already exists in its cache so it can reuse it. If the intermediate layer is not found or has changed, the Docker daemon pulls or rebuilds it.</p>
<h2 id="howtoreduceimagesize">How to reduce image size</h2>
<p>As we just saw, the layers play an important role in the final image size. To reduce the final size, we have to focus on the intermediate layers.<br>
Although some of them cannot be reduced (especially the one you start from), we can use a few tips to help reduce the final image size.</p>
<h3 id="groupcommandsinoneinstructionwhenpossible">Group commands in <strong>ONE</strong> instruction when possible</h3>
<p>Do not perform multiple installs in multiple <em><strong>RUN</strong></em> instructions. Let's compare multiple and single instructions by installing and removing packages:</p>
<h4 id="installingpackages">Installing packages</h4>
<h5 id="separateinstructions">Separate instructions</h5>
<p>To illustrate this statement, let’s build an image with two separate <em><strong>RUN</strong></em> instructions which install <code>curl</code> and <code>mysql-client</code> packages:</p>
<pre><code class="language-Dockerfile">FROM ubuntu:16.04

MAINTAINER Florian Lopes

RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y mysql-client
</code></pre>
<pre><code class="language-shell">$ docker build  -t tip1 .
[...]
$ docker images tip1 
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 tip1                latest              7e9105c27586        3 minutes ago       248.4 MB
</code></pre>
<h5 id="singleinstruction">Single instruction</h5>
<p>Now, let’s gather the two instructions in only one:</p>
<pre><code class="language-Dockerfile">FROM ubuntu:16.04

MAINTAINER Florian Lopes
RUN apt-get update &amp;&amp; apt-get install -y curl mysql-client
</code></pre>
<p>Let’s build our image again:</p>
<pre><code class="language-Dockerfile">$ docker build  -t tip1 .
[...]
$ docker images tip1 
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
tip1                latest              2886d17dc7f4        9 seconds ago       248 MB
</code></pre>
<p>Although the size difference is not so significant, you can expect better results when installing multiple packages.</p>
<h4 id="removingpackages">Removing packages</h4>
<h5 id="separateinstructions">Separate instructions</h5>
<p>Let’s see another interesting example in which we remove a <strong>temporary</strong> package in a separate instruction:</p>
<pre><code class="language-Dockerfile">FROM ubuntu:16.04                    
MAINTAINER Florian Lopes          
RUN apt-get update &amp;&amp; apt-get install -y curl &amp;&amp; curl http://[...]
RUN apt-get remove -y curl
</code></pre>
<p>You can see here that the <code>curl</code> package is removed right after being used, but in a separate instruction.<br>
Let’s see the final image size:</p>
<pre><code class="language-shell">$ docker build -t tip2 .
[...]
$ docker images tip2 
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE 
tip2                latest              632f4bf8667c        8 seconds ago       182.7 MB
</code></pre>
<h5 id="singleinstruction">Single instruction</h5>
<p>This time, let’s combine these instructions into one line:</p>
<pre><code class="language-Dockerfile">FROM ubuntu:16.04
MAINTAINER Florian Lopes
RUN apt-get update &amp;&amp; apt-get install -y curl &amp;&amp; curl http://[...] &amp;&amp; apt-get remove -y curl
</code></pre>
<pre><code class="language-shell">$ docker build -t tip3 .
[...]
$ docker images tip3
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
tip3                latest              bfea5f186684        11 seconds ago      182.1 MB
</code></pre>
<p>You can see that the image size has slightly decreased. Again, the difference is not very significant here because we only removed one package.</p>
<h4 id="whyisthereadifference">Why is there a difference?</h4>
<p>As we saw earlier, the Docker daemon creates an image for each instruction to execute the associated command. In the separate-instructions example, the superposition of all these images creates the final one. Because of this strategy, the <code>curl</code> package is still part of the final image (in the third layer, actually) even though it is removed later.</p>
<p><img src="https://blog.florianlopes.io/content/images/2016/09/Docker-layers-removing-packages-1.png" alt="Docker layers - removing packages in separate instructions" title="Docker layers - removing packages in separate instructions"></p>
<p><img src="https://blog.florianlopes.io/content/images/2016/09/Docker-layers-removing-packages-single-e1472990426334.png" alt="Docker layers - removing packages in a single instruction" title="Docker layers - removing packages in a single instruction"></p>
<h3 id="donotinstallpackagesrecommendationsnoinstallrecommendswheninstallingpackages">Do not install recommended packages (<em>--no-install-recommends</em>) when installing packages</h3>
<pre><code class="language-Dockerfile">RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends curl
</code></pre>
<h3 id="removenolongerneededpackagesorfilesinthesameinstructionifpossible">Remove no-longer-needed packages or files in the <strong>SAME</strong> instruction if possible</h3>
<h4 id="packagesexample">Packages example:</h4>
<pre><code class="language-Dockerfile">RUN apt-get update &amp;&amp; \ 
apt-get install -y --no-install-recommends curl &amp;&amp; \
curl &lt;a href=&quot;http://download.app.com/install.sh&quot;&gt;http://download.app.com/install.sh&lt;/a&gt; &amp;&amp; \
.install.sh &amp;&amp; apt-get remove -y curl
</code></pre>
<p>In this example, the package <code>curl</code> is only needed to retrieve an install file. Since it is not needed anymore, it can be removed (in the SAME instruction).</p>
<h4 id="filesexample">Files example:</h4>
<pre><code class="language-Dockerfile">RUN wget ${APP_URL} -O /tmp/app/install.sh &amp;&amp; \
    sh /tmp/app/install.sh &amp;&amp; \
    rm -rf /tmp/app/ &amp;&amp; \
    rm -rf /var/lib/apt/lists/*
</code></pre>
<h3 id="startwithasmallerbaseimage">Start with a smaller base image</h3>
<p>Do you need every package of Ubuntu (or other base images)? If not, you should consider starting with a smaller base image like Alpine (<a href="https://hub.docker.com/_/alpine/" target="_blank">https://hub.docker.com/_/alpine/</a>), which is likely to become the base image for many official Docker images (<a href="https://hub.docker.com/_/jenkins/">Jenkins</a>, <a href="https://hub.docker.com/_/maven/">Maven</a>). This base image weighs around 5 MB whereas the Ubuntu one is about 188 MB. You can see a great comparison of Docker base images here: <a href="https://www.brianchristner.io/docker-image-base-os-size-comparison/" target="_blank">https://www.brianchristner.io/docker-image-base-os-size-comparison/</a>.</p>
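<p>For instance, an image providing curl on top of Alpine stays tiny. A minimal sketch (the tag and package are illustrative):</p>

```Dockerfile
FROM alpine:3.4
# apk's --no-cache flag fetches the package index on the fly and keeps it
# out of the layer, so no separate cleanup step is needed
RUN apk add --no-cache curl
```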
<h3 id="inspectingimagesfromdockerhub">Inspecting images from DockerHub</h3>
<p>To easily inspect a DockerHub image, you can use the MicroBadger service: <a href="https://microbadger.com/" target="_blank">https://microbadger.com/</a>.</p>
<p><img src="https://blog.florianlopes.io/content/images/2016/09/MicroBadger-Ubuntu-image.png" alt="MicroBadger - Ubuntu layers - 5 tips to reduce Docker image size" title="MicroBadger – Ubuntu layers"></p>
<h2 id="tldr">TL;DR</h2>
<ol>
<li>Group commands in <strong>ONE</strong> instruction when possible</li>
<li>Do not install recommended packages (<em>--no-install-recommends</em>)</li>
<li>Remove no-longer-needed packages or files in the <strong>SAME</strong> instruction</li>
<li>Clean <em>apt-cache</em> after packages installs</li>
<li>Start with a smaller base image: Alpine</li>
</ol>
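<p>Putting tips 1 to 4 together, a typical install sequence fits in one <code>RUN</code> instruction (the package and URL below are illustrative):</p>

```Dockerfile
FROM ubuntu:16.04
# One layer: install a temporary tool, use it, then remove it
# and clean the apt cache, all in the same instruction
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    curl -O http://download.app.com/install.sh && \
    sh ./install.sh && \
    rm -f ./install.sh && \
    apt-get remove -y curl && \
    rm -rf /var/lib/apt/lists/*
```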
<p>If you are too busy to focus on reducing your image size, here is a tool you could consider: <a href="https://github.com/jwilder/docker-squash" target="_blank">https://github.com/jwilder/docker-squash</a>.</p>
<a href="http://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=12728585" rel="tag" target="_blank" style="display:none;">
<!--kg-card-end: markdown--></a>]]></content:encoded></item><item><title><![CDATA[Run Spring Boot in a Docker container with debug and Spring profiles support]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="springbootdockerwithdebugandspringprofiles">Spring Boot &amp; Docker with debug and Spring profiles</h1>
<p>Spring Boot and Docker are extremely popular. The Spring Boot adoption now hits 34% according to this survey: <a target="_blank" href="http://www.baeldung.com/java-8-spring-4-and-spring-boot-adoption">http://www.baeldung.com/java-8-spring-4-and-spring-boot-adoption</a>. Docker adoption has more than doubled from 13% to 27% in 2016 according to a <em><strong>RightScale</strong></em> <a href="http://www.rightscale.com/press-releases/rightscale-2016-state-of-the-cloud-report" target="_blank">survey</a>.</p>
<figure style="float:left; margin:1em;">
    <img src="https://blog.florianlopes.io/content/images/2016/10/docker-logo.png" alt="Docker logo">
<figure>
<figure style="float:right; margin:1em;">
    <img src="https://blog.florianlopes.io/content/images/2016/04/spring-boot-project-logo.png" alt="Spring Boot logo - Spring Boot & Docker with debug and Spring profiles">
<figure>
<p>Spring</p></figure></figure></figure></figure>]]></description><link>https://blog.florianlopes.io/spring-boot-docker-debug-spring-profiles/</link><guid isPermaLink="false">5eda297e26dda700015ae70b</guid><category><![CDATA[Docker]]></category><category><![CDATA[Spring Boot]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Mon, 04 Apr 2016 05:53:50 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="springbootdockerwithdebugandspringprofiles">Spring Boot &amp; Docker with debug and Spring profiles</h1>
<p>Spring Boot and Docker are extremely popular. The Spring Boot adoption now hits 34% according to this survey: <a target="_blank" href="http://www.baeldung.com/java-8-spring-4-and-spring-boot-adoption">http://www.baeldung.com/java-8-spring-4-and-spring-boot-adoption</a>. Docker adoption has more than doubled from 13% to 27% in 2016 according to a <em><strong>RightScale</strong></em> <a href="http://www.rightscale.com/press-releases/rightscale-2016-state-of-the-cloud-report" target="_blank">survey</a>.</p>
<figure style="float:left; margin:1em;">
    <img src="https://blog.florianlopes.io/content/images/2016/10/docker-logo.png" alt="Docker logo">
<figure>
<figure style="float:right; margin:1em;">
    <img src="https://blog.florianlopes.io/content/images/2016/04/spring-boot-project-logo.png" alt="Spring Boot logo - Spring Boot & Docker with debug and Spring profiles">
<figure>
<p>Spring Boot speeds up the application development process, while Docker helps streamline deployment.</p>
<p>The <strong><a href="https://github.com/f-lopes/spring-boot-docker/">spring-boot-docker</a></strong> image will help you run your Spring Boot applications with Docker, either in development or production stage.</p>
<h2 id="featuresofspringbootdockerimage">Features of spring-boot-docker image</h2>
<p>Although there are many images for this purpose, this one provides two useful features:</p>
<h3 id="easilydebugyourapplication"><strong>Easily debug your application</strong></h3>
<p>Simply set the <code>DEBUG</code> variable to true in the <code>docker-compose.yml</code> file and the application will start in debug mode:</p>
<pre><code class="language-yaml">environment: 
- &quot;DEBUG=true&quot;
</code></pre>
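<p>Under the hood, such an image's entrypoint typically translates this flag into JVM debug options. As a rough sketch only (the JDWP option string and port 8000 are assumptions here, not the image's documented behavior), the logic could look like this:</p>

```shell
#!/bin/sh
# Hypothetical sketch of an entrypoint honoring a DEBUG flag (NOT the actual
# entrypoint of the spring-boot-docker image; option string and port assumed).
java_opts_for_debug() {
    if [ "$1" = "true" ]; then
        # suspend=n: the JVM starts immediately instead of waiting for a debugger
        echo "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
    fi
}

DEBUG=true
echo java $(java_opts_for_debug "$DEBUG") -jar /app/spring-boot-application.jar
```

<p>With <code>DEBUG=true</code>, a remote debugger (from your IDE, for instance) can then attach to the mapped debug port.</p>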
<h3 id="runyourapplicationwithdesiredspringprofile"><strong>Run your application with desired Spring profile</strong></h3>
<p>Spring profiles are a useful way to separate application resources between different stages. To specify the Spring profile your application should run with, simply set the <code>SPRING_PROFILES_ACTIVE</code> variable in <code>docker-compose.yml</code>:</p>
<pre><code class="language-yaml">environment:
 - &quot;SPRING_PROFILES_ACTIVE=dev&quot;
</code></pre>
<p>To run the application within the Docker container, simply place your executable jar into the <code>assets</code> directory, renamed as follows: <code>spring-boot-application.jar</code>.</p>
<p>Then launch the container:</p>
<pre><code class="language-bash">docker-compose up -d
</code></pre>
<p><strong>More instructions are available here:</strong> <a href="https://github.com/f-lopes/spring-boot-docker" target="_blank">https://github.com/f-lopes/spring-boot-docker</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Host multiple websites on a single host with Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="hostmultiplesubdomainsapplicationsonasinglehostusingdocker">Host multiple subdomains/applications on a single host using Docker</h1>
<p><a href="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png"><img src="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png" alt="Docker - host multiple subdomains"></a></p>
<p><em>Docker is becoming more and more suitable for personal environments, especially with private servers, which may be migrated often.</em></p>
<p><em>A developer usually has more than one app living on his own private server such as a blog, some development apps</em></p>]]></description><link>https://blog.florianlopes.io/host-multiple-websites-on-single-host-docker/</link><guid isPermaLink="false">5eda297e26dda700015ae70a</guid><category><![CDATA[Docker]]></category><category><![CDATA[domains]]></category><category><![CDATA[Nginx]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Tue, 08 Mar 2016 06:26:28 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="hostmultiplesubdomainsapplicationsonasinglehostusingdocker">Host multiple subdomains/applications on a single host using Docker</h1>
<p><a href="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png"><img src="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png" alt="Docker - host multiple subdomains"></a></p>
<p><em>Docker is becoming more and more suitable for personal environments, especially with private servers, which may be migrated often.</em></p>
<p><em>A developer usually has more than one app living on his own private server, such as a blog and some development apps like Jenkins, GitLab and so on. These apps are likely to use the standard web port 80. As this port is already bound to your main site, for example, your Docker instances will not be accessible through it.</em></p>
<p>This post will show you one way to host multiple applications, such as a blog, a personal website and many others, on a single host using Docker containers.</p>
<h2 id="targetarchitecture">Target architecture</h2>
<p>The ideal architecture for hosting multiple apps on a dedicated server would be to expose each application on port 80 through a specific sub-domain (<em>blog.domain.com</em>, <em>jenkins.domain.com</em>, <em>gitlab.domain.com</em>).</p>
<h2 id="usingnginxasareverseproxy">Using Nginx as a reverse-proxy</h2>
<p>These requirements can be achieved using a proxy (also called <em>reverse-proxy</em>). Here is a diagram:</p>
<p><a href="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-Nginx-version-1.png"><img src="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-Nginx-version-1.png" alt="Docker - host multiple subdomains - Nginx version"></a></p>
<p>This classic architecture could be implemented using Nginx as a <em>reverse-proxy</em>, but this solution comes with some drawbacks:</p>
<ul>
<li>the need to write a configuration file for each proxied application/container</li>
<li>the need to reload Nginx each time an application or a container is added to the architecture.</li>
</ul>
<h2 id="usingnginxproxyfromjasonwilder">Using nginx-proxy from Jason Wilder</h2>
<p><a href="https://github.com/jwilder/nginx-proxy" target="_blank"><em>Nginx-proxy</em></a> consists of a simple Nginx server and <a href="https://github.com/jwilder/docker-gen" target="_blank"><em>docker-gen</em></a>. <em>Docker-gen</em> is a small tool written in Go which can be used to generate Nginx/HAProxy configuration files from Docker container metadata (obtained via the Docker API).</p>
<p>These two applications run as Docker containers and are therefore easy to get up and running. Once started, <em>nginx-proxy</em> acts as a reverse proxy between your host and all your sub-domains (<em>blog.domain.com</em>, <em>jenkins.domain.com</em>, etc.), routing incoming requests using the <em><strong>VIRTUAL_HOST</strong></em> environment variable (if set, for each Docker container).</p>
<p>To proxy a Docker container, you basically have to expose the port the application uses (for example, 80 for WordPress) and add the <em><strong>VIRTUAL_HOST</strong></em> environment variable to the container:</p>
<p>Using the docker run command:<br>
<code>docker run -d --expose 80 -e VIRTUAL_HOST=blog.domain.com wordpress</code></p>
<p>Via docker-compose.yml file:</p>
<pre><code class="language-yaml">wordpress:
  image: wordpress
  links:
    - db:mysql
  expose:
    - 80
  environment:
    - &quot;VIRTUAL_HOST=blog.domain.com&quot;
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example
</code></pre>
<p>The previous configuration can be represented like this:</p>
<p><a href="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png"><img src="https://blog.florianlopes.io/content/images/2016/03/Docker-host-multiple-subdomains-1.png" alt="Docker - host multiple subdomains"></a><br>
As you can see above, the nginx-proxy listens on the standard HTTP port (80) and forwards incoming requests to the appropriate container. We will see later how this routing is done.</p>
<h2 id="startingnginxproxy">Starting Nginx-proxy</h2>
<p>To start the <em>nginx-proxy</em>, type the following command:</p>
<p><code>docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy</code></p>
<p>Using docker-compose syntax:</p>
<pre><code class="language-yaml">nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - &quot;80:80&quot;
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
</code></pre>
<p><em><strong>Update:</strong></em><br>
<em>As Moon suggested in his comment, you can add a bit of extra security by hiding the Nginx server version using a custom configuration file:</em></p>
<pre><code>server_tokens off;
</code></pre>
<p>To make nginx-proxy use your custom Nginx config file, launch it with this flag:</p>
<p><code>-v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro</code></p>
<h2 id="howitworks">How does it work?</h2>
<p>As you can guess from the last command, the <em>nginx-proxy</em> container listens on port 80 and has access to the host's Docker socket. With access to the Docker socket, the <em>nginx-proxy</em> container is able to receive Docker events (i.e. container creation, shutdown, etc.) and react to them.</p>
<p>At startup, the nginx-proxy container looks for containers with the <em><strong>VIRTUAL_HOST</strong></em> environment variable set and creates an appropriate basic Nginx configuration file for each of them. These configuration files tell Nginx how to forward incoming requests to the underlying containers.</p>
<p>Then, each time a container starts, <em>nginx-proxy</em> receives an event, generates the Nginx configuration needed to serve the container's application, and reloads Nginx.</p>
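<p>To picture what docker-gen produces, here is a deliberately simplified sketch (the real template is <em>nginx-proxy</em>'s <code>nginx.tmpl</code>; the function name and addresses below are illustrative, not the actual generated file):</p>

```shell
#!/bin/sh
# Hypothetical, simplified rendering of one Nginx server block per container,
# from its VIRTUAL_HOST value and the address discovered via the Docker API.
render_server_block() {
  vhost="$1"     # the container's VIRTUAL_HOST value
  upstream="$2"  # container IP:port (illustrative)
  cat <<EOF
server {
    listen 80;
    server_name $vhost;
    location / {
        proxy_pass http://$upstream;
    }
}
EOF
}

render_server_block blog.domain.com 172.17.0.2:80
```

<p>The real template is more elaborate, but the principle is the same: one server block per <em><strong>VIRTUAL_HOST</strong></em>, regenerated and reloaded whenever containers change.</p>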
<h3 id="routingrequestsusingvirtual_hostenvironmentvariable">Routing requests using VIRTUAL_HOST environment variable:</h3>
<p><em>Nginx-proxy</em> routes requests to containers according to the <em><strong>VIRTUAL_HOST</strong></em> environment variable of each container. This means that if you want a container to be served under a specific domain or subdomain, you have to launch it with the desired <em><strong>VIRTUAL_HOST</strong></em> environment variable.</p>
<p>Here is an example:</p>
<pre><code class="language-shell"># Launch WordPress (db part omitted for clarity)
docker run -d --name blog --expose 80 -e VIRTUAL_HOST=blog.domain.com wordpress
</code></pre>
<pre><code class="language-shell"># Launch Jenkins
docker run -d --name jenkinsci --expose 8080 -e VIRTUAL_HOST=jenkins.domain.com -e VIRTUAL_PORT=8080 jenkins
</code></pre>
<p>Again, here is the equivalent configuration for the Jenkins instance, using docker-compose syntax:</p>
<pre><code class="language-yaml">jenkins:
  image: jenkins
  expose:
    - 8080
    - 50000
  environment:
    - &quot;VIRTUAL_HOST=jenkins.domain.com&quot;
    - &quot;VIRTUAL_PORT=8080&quot;
  volumes:
    - &quot;/your/home:/var/jenkins_home&quot;
</code></pre>
<p><strong>Note:</strong> <em>the port used by the application inside the container must be exposed for <strong>nginx-proxy</strong> to see it. If the application exposes multiple ports, you have to tell <strong>nginx-proxy</strong> which port to proxy using the <strong>VIRTUAL_PORT</strong> environment variable.</em></p>
<p>In this example, <em>nginx-proxy</em> will forward all requests matching the <em><strong>blog.domain.com</strong></em> url to the <em><strong>WordPress</strong></em> container,<br>
while all requests matching the <em><strong>jenkins.domain.com</strong></em> url will be forwarded to the <em><strong>Jenkins</strong></em> container.</p>
<p>This tool is really simple and gives great flexibility. It allows running multiple Docker containers on the same dedicated server, without writing much configuration.</p>
<p><em><strong>Tip:</strong></em><br>
<strong>Map a container to multiple domains:</strong><br>
A common requirement is using multiple domains for a given container. To do this, simply add hosts to <em><strong>VIRTUAL_HOST</strong></em> variable like this:<br>
<em><strong>VIRTUAL_HOST=domain.com,www.domain.com,home.domain.com</strong></em></p>
<p>Further documentation can be found at the following url: <a href="https://github.com/jwilder/nginx-proxy" target="_blank">https://github.com/jwilder/nginx-proxy</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Access Spring profiles in JSP using custom tag]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h2 id="springmvcaccessspringprofilesinjsp">Spring MVC – Access Spring profiles in JSP</h2>
<p>This post explains how to restrict access to an area, based on the active Spring profile. By reading this, you will be able to access Spring profiles in JSP in order to achieve this functionality:</p>
<pre><code class="language-jsp">&lt;qcm:profile value=&quot;dev&quot;&gt;
    &lt;</code></pre>]]></description><link>https://blog.florianlopes.io/access-spring-profiles-with-custom-jsp-tag/</link><guid isPermaLink="false">5eda297e26dda700015ae708</guid><category><![CDATA[Spring MVC]]></category><category><![CDATA[Spring profiles]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[Florian Lopes]]></dc:creator><pubDate>Mon, 21 Dec 2015 19:38:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="springmvcaccessspringprofilesinjsp">Spring MVC – Access Spring profiles in JSP</h2>
<p>This post explains how to restrict access to an area, based on the active Spring profile. By reading this, you will be able to access Spring profiles in JSP in order to achieve this functionality:</p>
<pre><code class="language-jsp">&lt;qcm:profile value=&quot;dev&quot;&gt;
    &lt;form action=&quot;login-as-admin&quot; method=&quot;POST&quot;&gt;
        &lt;input type=&quot;submit&quot; value=&quot;login as admin&quot;/&gt;
    &lt;/form&gt;
&lt;/qcm:profile&gt;
</code></pre>
<h3 id="springprofiles">Spring profiles</h3>
<p>Spring profiles allow you to create and run a separate configuration per environment. A common use case is declaring multiple data source configuration beans according to the environment (H2 for development, PostgreSQL in production).</p>
<p>To enable a class or bean method for a given profile, simply annotate it with <code>@Profile(&quot;...&quot;)</code>:</p>
<pre><code class="language-java">@Profile(value = {QcmProfile.HEROKU, QcmProfile.PROD})
@Bean
public DataSource dataSource() {
    final HikariDataSource dataSource = new HikariDataSource();
    dataSource.setMaximumPoolSize(properties.getMaximumPoolSize());
    dataSource.setDataSourceClassName(properties.getDataSourceClassName());
    dataSource.setDataSourceProperties(dataSourceProperties());
    return dataSource;
}

@Profile(QcmProfile.TEST)
@Bean
public DataSource testDataSource() {
    final HikariDataSource dataSource = new HikariDataSource();
    dataSource.setMaximumPoolSize(properties.getMaximumPoolSize());
    dataSource.setDataSourceClassName(properties.getDataSourceClassName());
    dataSource.setDataSourceProperties(testDataSourceProperties());
    return dataSource;
}
</code></pre>
<p>You can see the complete configuration class <a href="https://github.com/f-lopes/java-qcm/blob/develop/src/main/java/com/ingesup/java/qcm/config/DataConfig.java" target="_blank">here</a>.</p>
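<p>For completeness, the active profile itself is chosen at launch time, outside the code. Two common ways (the profile name <code>dev</code> is just an example):</p>

```shell
#!/bin/sh
# Selecting the active Spring profile at launch (profile name illustrative):
#
#   java -Dspring.profiles.active=dev -jar app.jar   # JVM system property
#
# or through the environment variable Spring also recognizes:
export SPRING_PROFILES_ACTIVE=dev
echo "Active profile: $SPRING_PROFILES_ACTIVE"
```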
<p>Any component (<code>@Component</code> / <code>@Configuration</code>) annotated with <code>@Profile(&quot;...&quot;)</code> will be loaded if the given profile(s) is/are enabled.<br>
This behavior is achieved by using <code>@Conditional(ProfileCondition.class)</code> on the <code>@Profile</code> annotation itself.</p>
<p>As you can see, the <code>ProfileCondition</code> simply checks the given value against the current environment profiles:</p>
<pre><code class="language-java">/**
 * {@link Condition} that matches based on the value of a {@link Profile @Profile}
 * annotation.
 *
 * @author Chris Beams
 * @author Phillip Webb
 * @author Juergen Hoeller
 * @since 4.0
 */
class ProfileCondition implements Condition {

	@Override
	public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
		if (context.getEnvironment() != null) {
			MultiValueMap&lt;String, Object&gt; attrs = metadata.getAllAnnotationAttributes(Profile.class.getName());
			if (attrs != null) {
				for (Object value : attrs.get(&quot;value&quot;)) {
					if (context.getEnvironment().acceptsProfiles(((String[]) value))) {
						return true;
					}
				}
				return false;
			}
		}
		return true;
	}

}

</code></pre>
<h3 id="usespringprofilesinjsp">Use Spring profiles in JSP</h3>
<p>It may be useful to display a piece of content in a JSP file, based on a specific environment profile.<br>
In my use case, I wanted to display an admin-login button to facilitate tests, only during the development phase (development profile).</p>
<p>I found that the best way to achieve this behavior was to develop a custom JSP tag, as the JSP tag API provides helpers to include or skip a piece of content.</p>
<p>The main concern was to find out how to access Spring profiles inside a tag. Fortunately, Spring provides a useful tag class: <code>RequestContextAwareTag</code>.</p>
<h3 id="accessspringprofilesinatag">Access Spring profiles in a tag</h3>
<p>To gain access to the Spring context, you need your tag to extend the <code>RequestContextAwareTag</code> class, which<br>
exposes the current “<em><strong>RequestContext</strong></em>” according to its JavaDoc:</p>
<pre><code class="language-java">
/**
 * Superclass for all tags that require a {@link RequestContext}.
 *
 * &lt;p&gt;The {@code RequestContext} instance provides easy access
 * to current state like the
 * {@link org.springframework.web.context.WebApplicationContext},
 * the {@link java.util.Locale}, the
 * {@link org.springframework.ui.context.Theme}, etc.
 *
 * &lt;p&gt;Mainly intended for
 * {@link org.springframework.web.servlet.DispatcherServlet} requests;
 * will use fallbacks when used outside {@code DispatcherServlet}.
 *
 * @author Rod Johnson
 * @author Juergen Hoeller
 * @see org.springframework.web.servlet.support.RequestContext
 * @see org.springframework.web.servlet.DispatcherServlet
 */
</code></pre>
<p>This class extends the <code>TagSupport</code> Servlet class and overrides the main method <code>doStartTag()</code> to inject the <code>RequestContext</code>:</p>
<pre><code class="language-java">	/**
	 * Create and expose the current RequestContext.
	 * Delegates to {@link #doStartTagInternal()} for actual work.
	 * @see #REQUEST_CONTEXT_PAGE_ATTRIBUTE
	 * @see org.springframework.web.servlet.support.JspAwareRequestContext
	 */
	@Override
	public final int doStartTag() throws JspException {
		try {
			this.requestContext = (RequestContext) this.pageContext.getAttribute(REQUEST_CONTEXT_PAGE_ATTRIBUTE);
			if (this.requestContext == null) {
				this.requestContext = new JspAwareRequestContext(this.pageContext);
				this.pageContext.setAttribute(REQUEST_CONTEXT_PAGE_ATTRIBUTE, this.requestContext);
			}
			return doStartTagInternal();
		}
		catch (JspException ex) {
			logger.error(ex.getMessage(), ex);
			throw ex;
		}
		catch (RuntimeException ex) {
			logger.error(ex.getMessage(), ex);
			throw ex;
		}
		catch (Exception ex) {
			logger.error(ex.getMessage(), ex);
			throw new JspTagException(ex.getMessage());
		}
	}
</code></pre>
<p>By extending the Spring <code>RequestContextAwareTag</code> and overriding the <code>doStartTagInternal()</code> method, your tag will have access to the <code>RequestContext</code>, needed to retrieve Spring profiles.</p>
<p>With this context, it’s easy to retrieve environment profiles:</p>
<pre><code class="language-java">import org.apache.commons.lang3.ArrayUtils;
import org.springframework.core.env.Environment;
import org.springframework.web.servlet.tags.RequestContextAwareTag;

public class ProfileConditionTag extends RequestContextAwareTag {
    
    private String profile;

    @Override
    protected int doStartTagInternal() throws Exception {
        final Environment environment = this.getRequestContext().getWebApplicationContext().getEnvironment();
        if (environment != null) {
            final String[] profiles = environment.getActiveProfiles();
            if (ArrayUtils.contains(profiles, this.profile)) {
                return EVAL_BODY_INCLUDE;
            }
        }
        return SKIP_BODY;
    }

    public String getValue() {
        return profile;
    }

    public void setValue(String profile) {
        this.profile = profile;
    }
}
</code></pre>
<h3 id="usage">Usage</h3>
<p>Create the taglib descriptor and place it into the <code>WEB-INF/taglibs/</code> directory:</p>
<pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot; ?&gt;
    &lt;taglib xmlns=&quot;http://java.sun.com/xml/ns/j2ee&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-jsptaglibrary_2_0.xsd&quot; version=&quot;2.0&quot;&gt;
    &lt;description&gt;Conditional profile Tag&lt;/description&gt;
    &lt;tlib-version&gt;2.1&lt;/tlib-version&gt;
    &lt;short-name&gt;ProfileConditionTag&lt;/short-name&gt;
    &lt;uri&gt;&lt;/uri&gt;
    &lt;tag&gt;
        &lt;name&gt;profile&lt;/name&gt;
        &lt;tag-class&gt;com.ingesup.java.qcm.taglib.ProfileConditionTag&lt;/tag-class&gt;
        &lt;body-content&gt;scriptless&lt;/body-content&gt;
        &lt;attribute&gt;
            &lt;name&gt;value&lt;/name&gt;
            &lt;required&gt;true&lt;/required&gt;
        &lt;/attribute&gt;
    &lt;/tag&gt;
&lt;/taglib&gt;
</code></pre>
<p>Using this tag is pretty straightforward:</p>
<pre><code class="language-jsp">&lt;qcm:profile value=&quot;dev&quot;&gt;
    &lt;form action=&quot;login-as-admin&quot; method=&quot;POST&quot;&gt;
        &lt;input type=&quot;submit&quot; value=&quot;login as admin&quot;/&gt;
    &lt;/form&gt;
&lt;/qcm:profile&gt;
</code></pre>
<p>You can head to <a href="http://qcm-plus-plus.herokuapp.com/" target="_blank">this url</a> and see that the admin-login button doesn’t appear, as the active profile for the application is <code>heroku</code>.</p>
<h3 id="applicationcode">Application code</h3>
<p>You can see the whole application code in my <a href="https://github.com/f-lopes/java-qcm" target="_blank">GitHub project</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>