A Note On Environment Variables With Docker

In a previous post I mentioned the three different methods for defining Environment Variables for our Docker environment, but I hit a small gotcha that I didn't immediately realize.

You cannot reference those variables directly in your Dockerfile during setup. You can create new Environment Variables in your Dockerfile (hey, method 4), but you can't access those externally defined variables in your Dockerfile process.

Here's the deal. When you run `docker-compose build`, Docker is creating the layers of your stack, but not firing off your entrypoints, which is where the meat of your processes lives, and where the Environment Variables actually get read. So, what if, in your Dockerfile, you wanted to define your server timezone? We set a timezone Environment Variable in a previous post. How can we then pass that to the Dockerfile for the `build`?

Arguments. I can define a build argument in my Docker Compose file, and then reference it from the Dockerfile during `build`. Improving on that further, I can dynamically set that Argument, in the Docker Compose file, using the Environment Variable I already set. Let's look at a small section of the Docker Compose file, where I define my Argument.

docker-compose.yml

version: "3.3"
services:
    lucee:
        build:
            context: ./lucee
            args:
                - TZ=${TZ}
...

I won't talk about the other bits here, but you can see the args section under build, where I've defined TZ and tied it to the Environment Variable we previously set up with the same name.

Now let's look at how you use the Argument in your Dockerfile.

Dockerfile

...
ARG TZ

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone ...

Now that last line (setting the system timezone) might differ depending on your base image, but it shows you how to properly access the variable in your `build`.
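
One wrinkle worth noting: an ARG only exists while the image is being built. If you also want the value available inside the running container, you have to copy it into an Environment Variable yourself (hey, method 4 again). Here's a minimal sketch of that pattern, building on the Dockerfile above:

Dockerfile

...
ARG TZ

# Persist the build argument as a runtime Environment Variable
ENV TZ=${TZ}

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
...

Without that ENV line, $TZ would be empty in any process your entrypoint fires off later.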

Analyzing Our Docker Compose Infrastructure Requirements

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

So, before we continue, I think it's important to lay out some of the next steps in what I wanted/needed to accomplish. I'm using Docker Compose to define my infrastructure. I started with the database, as it will be used by multiple sites, so that was a no-brainer.

But what's next? Well, first let me look at some of my requirements.

  • Database (check)
  • ColdFusion Rendering Engine (for this blog) [Lucee]
  • Multi Context CF Setup (blog and os project sites/pages)
  • Web Server
  • New Photography Site (?)
  • Secure Sites with SSL for Google
  • Auto Backup to S3 (?)

Yeah, I set some stretch goals in there too. But, it's what I wanted, so I got to work.

In my initial implementation on DigitalOcean I used the default lucee4-nginx container. Nginx is a nice, easily configurable web server, and it worked great for a year, right up until DigitalOcean restarted my Droplet while running some necessary security maintenance on their infrastructure. Suddenly, nothing worked.

Whoops.

OK, so this was the first thing I had to figure out. It turned out to be relatively easy to track down. I was using the "latest" tag of the container, and Lucee had updated the version of Tomcat inside the lucee4-nginx container. There were changes to the container's internal pathing that no longer jibed with the various settings files I had, so I just had to resolve the pathing issues to get it all straight. I also took the opportunity to go ahead and switch to Lucee 5.2.

Now I was back up and running on my initial configuration, but (as you can see in the list above) I had some new goals to accomplish. So I sat down and started looking over my other requirements to figure out exactly what I needed. One of the first things I looked into was the SSL certs. I could buy expensive wildcard domain certs, but this is a blog; it creates no direct income. Luckily there's LetsEncrypt, a great little project working to secure the internet by providing a free, automated and open Certificate Authority to distribute, configure and manage SSL certs.

Long story short, my investigation of all of my requirements made me realize that I needed to decouple Lucee from Nginx, putting each in its own separate container. I'm going to use Nginx as a reverse proxy to multiple containers/services, so decoupling makes the most sense (there's a rough sketch of that layout after the list below). I'm still keeping things small, because this is all out of pocket, but one of the advantages of Docker Compose is that I can define multiple small containers, each handling its own defined responsibility.

In the end our containers will look something like this:

  • MariaDB (check)
  • Lucee 5.2 (3 sites)
  • Other (photo site, possibly Ghost)
  • Nginx
  • Docker-Gen (template generator, dependency for...)
  • LetsEncrypt
  • Backup (TBD)
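
To make that decoupling a little more concrete, here's a rough sketch of how the separated Nginx and Lucee services might sit side by side in the Compose file (the folder names are hypothetical placeholders, not my final config):

version: "3.3"
services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      - my-network
  lucee:
    build: ./lucee
    networks:
      - my-network

networks:
  my-network:

Because both services share the network, the Nginx config can reach the Lucee container by its service name, which is what makes the reverse proxy arrangement work.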

Everyone's configuration changes over time, and this is what I came up with after my latest analysis of my requirements. I've already gone through multiple rounds of attacking each requirement, and probably haven't finalized things yet, but next post we'll step in again, set up our Nginx container, and start some configuration.

Adding a MariaDB Database Container

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

Building on our last post, we're going to continue our step-by-step setup by talking more about the database. I had decided to use MariaDB. For anyone unfamiliar, MariaDB is a fork of MySQL, created by many of MySQL's core developers to maintain an open source alternative when Oracle acquired MySQL. Since this blog was using a MySQL database on the shared hosting platform, I needed something compatible that I could run in our DigitalOcean Droplet.

In that last post I showed you the beginnings of our Docker Compose configuration.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

I explained the basics of this in the last post, but now let me go into some more depth on the finer points of the MariaDB container itself. First, most of the magic comes from using Environment Variables, and there are three different ways of setting environment variables with Docker Compose:

  • Define them in a .env file at the root of your project directory, for variables that apply to all of your containers.
  • Create specific environment variable files (in this case the mariadb.env file) and attach them to containers using the env_file configuration attribute, like we did above.
  • Add environment variables to one specific container using the environment configuration attribute on a service.

Why so many different ways to do the same thing? Use cases. The .env method is for variables shared across all of your containers. The env_file method can take multiple files, for when you need to define variables for one container and share them with some, but not all, of the others. And the environment method applies only to that one container. There may even be instances where you use all three methods.
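
The first two methods both show up later in this post, but for completeness, here's what that third method looks like, set directly on a service definition (the value is just a placeholder):

version: "3.3"
services:
  database:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=mydbrootuserpw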

In that vein, let's look at a possible use case for a "global" environment variable. I want to use the same timezone in all of my containers. In my .env file I put the following:

TIMEZONE=America/Chicago
TZ=America/Chicago

I applied the same value to two separate keys because some prebuilt containers look for one key while others look for the other, but this is a perfect example of a "global" environment variable.

Now we can look at environment variables that are specific to our MariaDB container. Here's where things can get tricky. Some prebuilt containers are fairly well documented, some have no documentation at all, and most lie somewhere in between. The MariaDB container documentation is pretty good, but sometimes you have to dig in to get everything you need. Let's step in.

First, I needed MariaDB to set up the service. To do this right, you have to define the password for the root user. This is something that can go in your container-specific environment variables, or in a container-specific environment variable file.

mariadb.env

MYSQL_ROOT_PASSWORD=mydbrootuserpw

While this will get the service up and running, it's not enough. I needed my blog database automatically set up by the build, as well as the user my blog would use to access the database. Luckily, the prebuilt MariaDB container makes this pretty easy as well.

mariadb.env

MYSQL_DATABASE=databaseiwantmade
MYSQL_USER=userofthatdb
MYSQL_PASSWORD=passwordofthatuser

Boom! Without any extra code I created my database and the user I needed. But...

This was just the first step. I now have the service, the database, and the user, but no data. How would I preseed my blog data without manual intervention? Turns out that was fairly simple as well. Though it's only glossed over in the container documentation, you can provide scripts to fill your database, and more. Remember these lines from the Docker Compose service definition?

  ...
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
  ...

I was binding a local directory to a specific directory in the container. I can place any .sql or .sh file in that directory, and the container will automatically run them, in alphabetical order, during container startup. (One caveat: these init scripts only run when the container initializes a fresh, empty data directory; an existing database volume won't be reprocessed.)

OK. Back up. What? So, the container documentation says you can do this, but it doesn't really tell you how, or go into any kind of depth. So I went and looked at that container's Dockerfile and found the following near the end:

ENTRYPOINT ["docker-entrypoint.sh"]

This is a Dockerfile instruction that says "when you start up, and finish all the setup above me, go ahead and run this script." That script is in the GitHub repo for the MariaDB container as well. There are a lot of steps in it as it sets up the service and creates that base database and user for you, and then there's this bit of magic:

docker-entrypoint.sh

for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)     echo "$0: running $f"; . "$f" ;;
        *.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done

The secret sauce. Now, I don't do a ton of shell scripting, but I am a linguist who's been programming a long time, so I know this is a loop that runs files. It runs shell files, it runs .sql scripts, and it'll even run .sql scripts that have been zipped up gzip style. Hot Dog!

So, what this tells me is that the files it will automatically process need to be located in the container directory /docker-entrypoint-initdb.d, which you can see I mapped to a local directory in my Docker Compose service configuration. To try this out, I took my blogcfc.sql file, dropped it into my local sqlscripts mapped directory, and started things up. I was then able to log into my container from the command line and run mysqlshow to verify that not only was the database set up, but that it was loaded with data as well.
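
If you want to run the same check yourself, something like the following should work from the host, assuming the database service name from our Compose file. The single quotes matter: they keep your host shell from expanding the variable, so it resolves inside the container, where it's actually defined.

> docker-compose exec database sh -c 'mysqlshow -uroot -p"$MYSQL_ROOT_PASSWORD"'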

But it gets better. I needed a database for my Examples domain as well, which meant another database, another user, and more data. Now, I like to keep the .sql script for data, and use a .sh file for setting up the db, user and permissions. I also wanted the necessary connection details in my mariadb.env file, since I'll probably need them in another (Lucee) container later.

mariadb.env

...
EXAMPLES_DATABASE=dbname
EXAMPLES_USER=dbuser
EXAMPLES_PASSWORD=userpw
...

Then, I created my shell script for setting up the Examples database, and dropped it into that sqlscripts directory.

examples-setup.sh

#!/bin/bash

mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" <<MYSQL_SCRIPT
CREATE DATABASE IF NOT EXISTS $EXAMPLES_DATABASE;
CREATE USER '$EXAMPLES_USER'@'%' IDENTIFIED BY '$EXAMPLES_PASSWORD';
GRANT ALL PRIVILEGES ON $EXAMPLES_DATABASE.* TO '$EXAMPLES_USER'@'%';
FLUSH PRIVILEGES;
MYSQL_SCRIPT

echo "$EXAMPLES_DATABASE created"
echo "$EXAMPLES_USER given permissions"

Drop an accompanying .sql script into the same directory to populate the database (remember, all of these scripts run in alphabetical order, so name the data script so that it sorts after the setup script), and now I have a database service to fulfill my needs. Multiple databases, multiple users, pre-seeded data: we have the whole shebang.
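
One gotcha worth flagging: the entrypoint loop pipes plain .sql files into the default database (the one named in MYSQL_DATABASE), and a .sql file can't expand environment variables, so a data script aimed at the Examples database should hard-code its target with a USE statement. Here's a sketch, with a hypothetical table, and a file name chosen so it sorts after examples-setup.sh:

examples-table-data.sql

-- .sql files don't see environment variables, so the
-- database name has to be hard-coded here
USE dbname;

CREATE TABLE IF NOT EXISTS examples (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL
);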

By the way, remember this?

.env

TIMEZONE=America/Chicago
TZ=America/Chicago

The MariaDB container took that second variable (TZ) and automatically set the service's timezone for us as well. Snap!

This post covered the first container in our Docker Compose setup. Next post we'll continue our journey to set up a full environment.

Getting Started With Docker Compose

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

As I mentioned in the last post, it was time to change hosting, and I decided to go with DigitalOcean. But first, I had to figure out how to get all of my infrastructure deployed easily. DigitalOcean supports Docker, and I knew I could set up multiple containers easily using Docker Compose. I just had to decide on the infrastructure.

Docker Compose allows one to script the setup of multiple containers, tying in all the necessary resources. There are thousands of prebuilt containers available on Docker Hub to choose from, or you can create your own. I knew I was going to have to customize most of my containers, so I chose to create my own, extending some existing containers. To begin with, I knew that I had three core requirements.

  • Lucee - Open Source CFML Engine
  • NGINX - Open Source Web Server/Application Platform
  • MariaDB - Open Source Database Server

Now, I could've used a combined Lucee/NGINX container (Lucee has one of those built already), but I knew that I would use NGINX for other things in the future as well, so I thought it best to separate the two.

When setting up my environment, I stepped in piece by piece. I'm going to lay out each container in a separate post (as each had its own hurdles), but here I'll give you some basics. You define your environment in a docker-compose.yml file. Spacing is extremely important in these files, so if you have an issue bringing up your environment, spacing is one of the first things you'll want to check. Here I'll show a simple configuration for a database server.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

Here I've defined a network called my-network, and on that network I have a database service in a container called mydb. That container is aliased on the network as mydb and mysql; an alias is a name other containers can use to reference this container. I bound a local folder (sqlscripts) to a folder in the container (docker-entrypoint-initdb.d). I also included a local file that contains the Environment Variables used by the container. This container uses the actual mariadb image, but you could easily replace that line to point at a directory with its own Dockerfile defining your container (i.e. change 'image: mariadb:latest' to 'build: ./myimagefolder').
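
For example, that swap looks like this in context (./myimagefolder being a hypothetical folder containing its own Dockerfile):

version: "3.3"
services:
  database:
    container_name: mydb
    build: ./myimagefolder
    ...

Docker Compose will then look for a Dockerfile inside that folder when you run 'docker-compose build'.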

Bringing up your containers is simple. First you build your work, then you bring it up. From a terminal prompt:

> docker-compose build
> docker-compose up

You can add '-d' to that last command to run everything detached in the background, skipping all of the terminal output and dropping you back at a prompt, but sometimes it's good to see what's happening. To stop it all when running in the foreground, just hit Ctrl-C; otherwise use 'docker-compose stop' or 'docker-compose down'. Going forward it will probably help to review the Docker Compose Command Line Reference.
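
So a typical detached workflow ends up looking something like this:

> docker-compose up -d
> docker-compose logs -f
> docker-compose down

The 'logs -f' command tails the same output you'd otherwise see in the foreground, and hitting Ctrl-C there only stops the log tail, not your containers.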

The Docker Compose File Reference is very extensive, providing a ton of options to work with. Here I'm using the 3.3 version of the file, and it's important to know which one you're using when you look at examples on the web, as options change or become deprecated from version to version.

That's a start to a basic Docker Compose setup. Continuing in the series we'll go over each container individually, and see how our Compose config ties it all together. Until next time...

Adventures in Docker Land

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose.

For many years Full City Hosting hosted my blog for free. Emmet is a great guy, and they had shared CF servers, so it wasn't a big deal.

Fast forward a decade plus, two books, and tons of traffic... and FC phased out their shared CF servers and moved to the cloud. Time to move. (For the record, Emmet is still a great guy.)

The first thing to decide was "Where do I host this?" There are a few moving parts here (CF, MySQL, DNS, multiple domains, etc.), and there are costs to consider, and a learning curve. Every enterprise app I'd supported had been on a Windows Server, and that wasn't going to happen with my blog and examples apps on a budget.

Emmet suggested DigitalOcean. I could host multiple small containers on a small Droplet for about $10 a month. This should be enough to give me exactly what I need to run my few, small domains.

Step 2: Figure out the required infrastructure. Deployment to DigitalOcean is simple with Docker. I could create containers for my web server, my app server, my database, etc. But Adobe ColdFusion costs money, and while I had a license for CF X, Adobe ColdFusion isn't really conducive to containerized deployment either.

Enter Lucee, an open source CFML app server. Not only is it free, they even have prebuilt Docker containers, with documentation on how to configure them. Couple this with NGINX and MariaDB, and we're cookin' with Crisco.

So, I'm gonna cover how I did all of this, step by step. I found a lot of little traps along the way, but it's been a ride I'll share with you all here. Kick back, strap in, and let me know where I zigged when I should've zagged.