A Note On Environment Variables With Docker

I mentioned in a previous post the three different methods for defining Environment variables for our Docker environment, but I hit a small snag that I didn't immediately recognize.

You cannot reference those variables directly in your Dockerfile during setup. You can create new Environment Variables in your Dockerfile (hey, method 4), but you can't access those externally defined variables in your Dockerfile process.

Here's the deal. When you run `docker-compose build`, Docker is creating the layers of your stack, but it is not firing off your entrypoints, which is where the meat of your processes lives, and the bits that do access the Environment Variables. So, what if, in your Dockerfile, you wanted to define your server timezone? We set a timezone environment variable in a previous post. How can we then pass that to the Dockerfile for the `build`?

Arguments. I can define a build argument in my Docker Compose file, and then reference that from the Dockerfile during `build`. Improving on that further, I can dynamically set that Argument, in the Docker Compose file, using the Environment Variable I already set. Let's look at a small section of the Docker Compose file, where I define my Argument.

docker-compose.yml

version: "3.3"
services:
    lucee:
        build:
            context: ./lucee
            args:
                - TZ=${TZ}
...

I won't talk about the other bits, but you can see the args section under build, where I've defined TZ and tied it to the Environment Variable we previously set up with the same name.

Now let's look at how you use the Argument in your Dockerfile.

Dockerfile

...
ARG TZ

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone ...

Now, that last line might be different (it's setting the system timezone) depending on your environment, but it shows you how to properly access the variable in your `build`.
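One more note: a build argument only exists during `build`. If you also want the value available at runtime (that "method 4" from above), you can re-export it as an Environment Variable right in the Dockerfile. A minimal sketch, reusing the same TZ argument:

ARG TZ
ENV TZ=${TZ}

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

The ENV line persists the value into the running container, while the ARG alone would disappear once the image is built.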

Analyzing Our Docker Compose Infrastructure Requirements

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

So, before we continue, I think it's important to lay out some of the next steps in what I wanted/needed to accomplish. I'm using Docker Compose to define my infrastructure. I started with the database, as it will be used by multiple sites, so that was a no-brainer.

But what's next? Well, first let me look at some of my requirements.

  • Database (check)
  • ColdFusion Rendering Engine (for this blog) [Lucee]
  • Multi Context CF Setup (blog and os project sites/pages)
  • Web Server
  • New Photography Site (?)
  • Secure Sites with SSL for Google
  • Auto Backup to S3 (?)

Yeah, I set some stretch goals in there too. But, it's what I wanted, so I got to work.

In my initial implementation on Digital Ocean I used the default lucee4-nginx container. Nginx is a nice, easily configurable web server. And, it worked great for a year, up until Digital Ocean restarted my Droplet while running some necessary security maintenance on their infrastructure. Suddenly, nothing worked.

Whoops.

OK, so this was the first thing I had to figure out. It turned out to be a relatively easy thing to track down. I was using the "latest" tag of the container. Lucee had updated the version of Tomcat in the lucee4-nginx container. There were changes to the container's internal pathing that no longer jibed with the various settings files I had, so I just had to resolve the pathing issues to get it all straight. I also took the opportunity to go ahead and switch to Lucee 5.2.

Now I was back up and running on my initial configuration, but (as you can see in the list above) I had some new goals I wanted to accomplish. So I sat down and started looking over my other requirements to figure out exactly what I needed. One of the first things I looked into was the SSL certs. I could buy expensive wildcard domain certs, but this is a blog. It creates no direct income. Luckily there's LetsEncrypt. LetsEncrypt is a great little project working to secure the internet, creating a free, automated and open Certificate Authority to distribute, configure and manage SSL certs.

Long story short, my investigation of all of my requirements made me realize that I needed to decouple Lucee from Nginx, putting each in its own separate container. I'm going to use Nginx as a reverse proxy to multiple containers/services, so decoupling makes the most sense. I'm still keeping things small, because this is all out of pocket, but one of the advantages of Docker Compose is that I can define multiple small containers, each handling its own defined responsibility.

In the end our containers will look something like this:

  • MariaDb (check)
  • Lucee 5.2 (3 sites)
  • Other (photo site, possibly Ghost)
  • Nginx
  • Docker-Gen (template generator, dependency for...)
  • LetsEncrypt
  • Backup (TBD)

Everyone's configuration changes over time, and this is what I came up with after my latest analysis of my requirements. I've already gone through multiple rounds of attacking each different requirement, and probably haven't finalized things yet, but next post we'll step in again, set up our Nginx container, and start some configuration.

Adding a MariaDB Database Container

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

Building on our last post, we're going to continue our step-by-step setup by talking more about the database setup. I had decided to use MariaDB for my database. For anyone unfamiliar, MariaDB is a fork of MySQL, created by many of MySQL's core development team when Oracle bought MySQL, in order to maintain an open source alternative. Since this blog was using a MySQL database on the shared hosting platform, I needed something I could now use in our DigitalOcean Droplet.

In that last post I showed you the beginnings of our Docker Compose configuration.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

I explained the basics of this in the last post, but now let me go into some more depth on the finer points of the MariaDB container itself. First, most of the magic comes from using environment variables. There are three different ways to set environment variables with Docker Compose. First, you can define environment variables in a .env file at the root of your directory, with variables that apply to all of your containers. Second, you can create specific environment variable files (in this case the mariadb.env file) that you attach to containers using the env_file configuration attribute, like we did above. And third, you can add environment variables to a specific container using the environment configuration attribute on a service.

Why so many different ways to do the same thing? Use cases. The .env method is for variables shared across all of your containers. The env_file method can take multiple files, for when you need to define variables for more than one container, and share them with some (but not all) of your other containers, while the environment method applies to just that one container. There may even be instances where you use all three methods.
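For completeness, here's a minimal sketch of that third method, hanging an environment block directly off the service in the Compose file (shown as an alternative to the env_file approach we're actually using; the values are just there to illustrate the syntax):

services:
  database:
    image: mariadb:latest
    environment:
      - TZ=${TZ}
      - MYSQL_ROOT_PASSWORD=mydbrootuserpw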

In that vein, let's look at a possible use case for a "global" environment variable. I want to use the same timezone in all of my containers. In my .env file I put the following:

TIMEZONE=America/Chicago
TZ=America/Chicago

I applied the same value to two separate keys, because some prebuilt containers look for it one way while others look for it another, but this is a perfect example of a "global" environment variable.

Now we can look at environment variables that are specific to our MariaDB container. Here's where things can get tricky. Some prebuilt containers are fairly well documented, some have no documentation at all, and most lie somewhere in between. The MariaDB container documentation is pretty good, but sometimes you have to dig in to get everything you need. Let's step in.

First, I needed MariaDB to set up the service. To do this right, you have to define the password for the root user. This is something that can go in your container-specific environment variables, or the container-specific environment variable file.

mariadb.env

MYSQL_ROOT_PASSWORD=mydbrootuserpw

While this will get the service up and running, it's not enough. I needed my blog database automatically set up by the build, as well as the user that my blog would use to access the database. Luckily, the prebuilt MariaDB container makes this pretty easy as well.

mariadb.env

MYSQL_DATABASE=databaseiwantmade
MYSQL_USER=userofthatdb
MYSQL_PASSWORD=passwordofthatuser

Boom! Without any extra code I created my database and the user I needed. But...

This was just the first step. I now have the service, the database, and the user, but no data. How would I preseed my blog data without manual intervention? Turns out that was fairly simple as well. Though it's barely mentioned in the container documentation, you can provide scripts to fill your database, and more. Remember these lines from the Docker Compose service definition?

  ...
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
  ...

I was binding a local directory to a specific directory in the container. I can place any .sql or .sh file I want in that directory, and the container will automatically run them, in alphabetical order, during container startup.

OK. Back up. What? So, the container documentation says you can do this, but it doesn't really tell you how, or go into any kind of depth. So, I went and looked at that container's Dockerfile and found the following near the end:

ENTRYPOINT ["docker-entrypoint.sh"]

This is a Dockerfile instruction that says "when you start up, and finish all the setup above me, go ahead and run this script." And that script is in the GitHub repo for the MariaDB container as well. There are a lot of steps in there as it sets up the service and creates that base database and user for you, and then there's this bit of magic:

docker-entrypoint.sh

for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)     echo "$0: running $f"; . "$f" ;;
        *.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done

The secret sauce. Now, I don't do a ton of shell scripting, but I am a linguist who's been programming a long time, so I know this is a loop that runs files. It runs shell files, it runs the sql scripts, it'll even run sql scripts that have been zipped up gzip style. Hot Dog!

So, what it tells me is that the files it will automatically process need to be located in the directory /docker-entrypoint-initdb.d, which you can see I mapped to a local directory in my Docker Compose service configuration. To try this out, I took my blogcfc.sql file, dropped it into my local sqlscripts mapped directory, and started things up. I was then able to log into my container from the command line and use mysqlshow to verify that not only was the database set up, but that it was loaded with data as well.
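For reference, that verification was nothing fancy. Something along these lines, using the container name from our Compose file (your exact commands may differ):

> docker exec -it mydb bash
# then, from inside the container
> mysqlshow -uroot -p"$MYSQL_ROOT_PASSWORD"
> mysqlshow -uroot -p"$MYSQL_ROOT_PASSWORD" databaseiwantmade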

But, it gets better. I needed a database for my Examples domain as well. This required another database, another user, and data. Now, I like to keep the .sql script for data, and use a .sh file for setting up the db, user and permissions. I also wanted to put the needed details in my mariadb.env file, since I'll probably need them in another (Lucee) container later.

mariadb.env
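(The additions were along these lines; the values here are placeholders, but the variable names are the ones the setup script below expects.)

EXAMPLES_DATABASE=examplesdbiwantmade
EXAMPLES_USER=userofexamplesdb
EXAMPLES_PASSWORD=passwordofexamplesuser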

Then, I created my shell script for setting up the Examples database, and dropped it into that sqlscripts directory.

examples-setup.sh

#!/bin/bash

mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" <<MYSQL_SCRIPT
CREATE DATABASE IF NOT EXISTS $EXAMPLES_DATABASE;
CREATE USER '$EXAMPLES_USER'@'%' IDENTIFIED BY '$EXAMPLES_PASSWORD';
GRANT ALL PRIVILEGES ON $EXAMPLES_DATABASE.* TO '$EXAMPLES_USER'@'%';
FLUSH PRIVILEGES;
MYSQL_SCRIPT

echo "$EXAMPLES_DATABASE created"
echo "$EXAMPLES_USER given permissions"

Drop an accompanying .sql script into the same directory to populate the database (remember that all of these scripts are run in alphabetical order), and now I have a database service to fulfill my needs. Multiple databases, multiple users, pre-seeded data, we have the whole shebang.
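One note on naming, since everything in that directory runs alphabetically: the data script has to sort after the shell script that creates its database. A hypothetical layout (blogcfc.sql and examples-setup.sh are the actual names used above; the last file name is made up for illustration):

sqlscripts/
    blogcfc.sql            <- blog data, loaded into the MYSQL_DATABASE created for us
    examples-setup.sh      <- creates the Examples database and user
    examples-setup2.sql    <- Examples data; named so it sorts after the .sh script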

By the way, remember this?

.env

TIMEZONE=America/Chicago
TZ=America/Chicago

The MariaDB container took that second variable (TZ) and automatically set the service's timezone for us as well. Snap!

This post covered the first container in our Docker Compose setup. Next post we'll continue our journey to set up a full environment.

Getting Started With Docker Compose

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose. See the bottom of this post for other posts in the series.

As I mentioned in the last post, it was time to change hosting and I decided to go with DigitalOcean. But first, I had to figure out how to get all of my infrastructure deployed easily. DigitalOcean supports Docker, and I knew I could setup multiple containers easily using Docker Compose. I just had to decide on infrastructure.

Docker Compose allows one to script the setup of multiple containers, tying in all the necessary resources. There are thousands of prebuilt containers available on Docker Hub to choose from, or you can create your own. I knew I was going to have to customize most of my containers, so I chose to create my own, extending some existing containers. To begin with, I knew that I had three core requirements.

  • Lucee - Open Source CFML Engine
  • NGINX - Open Source Web Server/Application Platform
  • MariaDB - Open Source Database Server

Now, I could've used a combined Lucee/NGINX container (Lucee has one of those built already), but I knew that I would use NGINX for other things in the future as well, so I thought it best to separate the two.

When setting up my environment, I stepped in piece by piece. I'm going to lay out each container in separate posts (as each had its own hurdles), but here I'll give you some basics. You define your environment in a docker-compose.yml file. Spacing is extremely important in these files, so if you have an issue bringing up your environment, spacing will be one of the first things you'll want to check. Here I'll show a simple configuration for a database server.

version: "3.3"
services:
  database:
    container_name: mydb
    image: mariadb:latest
    env_file:
      - mariadb.env
    volumes:
      - type: bind
        source: ./sqlscripts
        target: /docker-entrypoint-initdb.d
    networks:
      my-network:
        aliases:
          - mysql
          - mydb
    restart: always

networks:
  my-network:

Here I've defined a network called my-network, and on that network I have a database service in a container called mydb. That container is aliased on the network as mydb and mysql. An alias is a name this container can be called by when referenced from other containers. I bound a local folder (sqlscripts) to a folder in the container (docker-entrypoint-initdb.d). I also included a local file that contains the Environment Variables used by the container. This container uses the actual mariadb image, but you could easily replace that line to point to a directory with its own Dockerfile defining your container (i.e. change 'image: mariadb:latest' to 'build: ./myimagefolder').
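As a quick illustration of what those aliases buy you: any other container attached to my-network can reach this database using 'mysql' (or 'mydb') as a hostname, with Docker's internal DNS resolving it for you. A hypothetical JDBC connection string from a future Lucee container would look something like this:

jdbc:mysql://mysql:3306/databaseiwantmade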

Bringing up your containers is simple. First you build your work, then you bring it up. From a terminal prompt:

> docker-compose build
> docker-compose up

You can add '-d' to that last command to skip all of the terminal output and drop you back at a prompt, but sometimes it's good to see what's happening. To stop it all (when not using '-d') just hit Ctrl-C; otherwise use 'docker-compose stop' or 'docker-compose down'. Going forward it will probably help to review the Docker Compose Command Line Reference.
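A few other standard commands I find myself reaching for, shown here just for convenience:

> docker-compose up -d              # start everything detached
> docker-compose logs -f database   # follow the output of a single service
> docker-compose ps                 # see what's running
> docker-compose down               # stop and remove the containers and the default network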

The Docker Compose File Reference is very extensive, providing a ton of options to work with. Here I'm using the 3.3 version of the file, and it's important to know which one you're using when you look at examples on the web, as options change or become deprecated from version to version.

That's a start to a basic Docker Compose setup. Continuing in the series we'll go over each container individually, and see how our Compose config ties it all together. Until next time...

Adventures in Docker Land

This multi-part series goes in depth in converting this site infrastructure to a containerized setup with Docker Compose.

For many years Full City Hosting hosted my blog for free. Emmet is a great guy, and they had shared CF servers, so it wasn't a big deal.

Fast forward a decade plus, two books, tons of traffic... Hey. And, FC phased out their shared CF servers, and moved to the cloud. Time to move. (For the record, Emmet is still a great guy.)

The first thing to decide was "Where do I host this?" There are a few moving parts here (CF, MySQL, DNS, multiple domains, etc.). And there are costs to consider. And learning curve. Every enterprise app I'd supported had been on a Windows Server, and that wasn't going to happen with my blog and examples apps on a budget.

Emmet suggested DigitalOcean. I could host multiple small containers on a small Droplet for about $10 a month. This should be enough to give me exactly what I need to run my few, small domains.

Step 2: Figure out the required infrastructure. Deployment to DigitalOcean is simple with Docker. I could create containers for my web server, my app server, my database, etc. But Adobe ColdFusion costs money, and while I had a license for CF X, Adobe ColdFusion isn't really conducive to containerized deployment either.

Enter Lucee, an open source CFML app server. Not only is it free, they even had prebuilt Docker containers, with documentation on how to configure them. Couple this with NGINX and MariaDB, and we're cookin' with Crisco.

So, I'm gonna cover how I did all of this, step by step. I found a lot of little traps along the way, but it's been a ride I'll share with you all here. Kick back, strap in, and let me know where I zigged when I should've zagged.

2014 in Review: A Year of Learning

2014 has been an outstanding year for me, in many ways, but perhaps one of the most important things (besides my family) has been continuing to do what I love. I'm passionate about development, and constantly working to learn new things. This is important for any developer, as our world changes so quickly today. New standards, new languages, new frameworks, it's a consistent onslaught of new ideas, and impossible to learn each and every one, but important to get exposure none-the-less.

The early part of the year I was still maintaining a large legacy application. We were in the final stages of migrating some very old pieces of the application into new framework architecture (FW/1) along with a new look (based on Bootstrap 3). When you're working on legacy applications, there are rarely opportunities to dive in to new things, so that project was a nice nudge to explore areas previously untouched. Back in April, though, I took on a new position that had me doing nothing but non-stop new product development. Not only was this a great switch, but the particular tasks I was given had me working with technologies with which I had little or no exposure, and often without a team peer who could mentor me, as many of the technologies were new for the company as well.

Historically, I'm a server-side programmer. But, over the last several years, I've spent a great deal of time honing my client-side skills. I'm no master, by any means, but I've consistently improved my front-end work over that time, and 2014 built upon that considerably as well.

One area I could get some mentoring in was AngularJS. This was a big shift for me, and while I am still learning more and more every day, it has been an exciting change up for me. Angular is extremely powerful and flexible, taking some hard things and making them dead simple (to be fair, it makes some simple things hard too ;) ). Angular is one of those items I wish I had spent more time with a year or so back, because in hindsight it could have saved me hundreds of hours of work. I'm really looking forward to working more with Angular.

From a software craftsmanship standpoint, I also had to dive in to a slew of technologies to assist in my day-to-day development. I now use Vagrant to spin up my local dev environment, which is a godsend. One quick command line entry, and I'm up and running with a fully configured environment. I went from playing around with NodeJS to working with it day in and day out, writing my own plugins (or tweaking existing ones), and using (and writing/tweaking) both Grunt and Gulp task runners for various automation and build tasks. To take something as "source" and convert it to "app" with a simple command line entry is the shiznit. How many hours did I waste building my own sprites, and compiling LESS in some app? Now it happens at the touch of a few keys.

Then there are the deep areas that some project might take you. I had to dust off years-old AS3 skills to refactor a Flash-based mic recorder. There was some extreme study into cross-browser client-side event handling. Iron.io has a terrific product for queuing up remote service containers to run small, process-intensive jobs concurrently without taxing your application's resources. That led into studies in Ruby, shell scripting, and Debian package deployment (not in any sort of order), as well as spinning up NodeJS http servers with Express.

Did you know that you can write automated tests of browser behavior using a headless page renderer like PhantomJS? Load a page, perform some actions, and record your findings; it really is incredibly powerful. It also has some hidden 'issues' to contend with, but it's well worth looking into, as the unit testing applications are excellent. Then you might change direction and check out everything there is to know about Aspect Ratio. It's something you should know more about when thinking about resizing images or video.

(Did I also mention that I went from Windows to Mac, on my desktop, and Windows to Linux, on my dev server? Best moves I ever made!)

Speaking of video, I got the opportunity to go beyond the basics with ffmpeg's video transcoding. For those unfamiliar with the process, you write a command line entry defining what you want. Basically it's one command with 200+ possible flags in thousands of possible combinations, and no clear documentation, by any one source, on how to get exactly what you want (read: a lot of reading and a lot of trial and error).
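Just to make that concrete, a fairly typical transcode command (illustrative only, not one of the actual commands from that project) looks something like this:

ffmpeg -i source.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.mp4

And that's before you ever touch scaling, filters, or any of the finer rate controls.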

If that had been all of it, it would have been a lot, but then I really got to have fun, and got to rewrite a Chrome Extension. Now, I had the advantage that we already had an extension, but I was tasked with heavily expanding on its capabilities, and it's been a blast. Going from something relatively simple to something much more complex is always a challenge, but doing so when you don't fully grasp the tech at hand is even more challenging. Google has created a brilliant triple tier architecture for interfacing the browser 'chrome' with the pages inside it, and developing advanced extensions with injected page applications has a lot of twists and turns along the way. I've learned enough along the way that I'm considering writing a presentation on this process for the upcoming conference season.

So, in retrospect, I went from maintaining a large legacy system to doing cutting edge custom development, learning something new each and every day. Awesome! Now, the downside to this sort of process is that you lose valuable productivity time getting through the learning curve. It's difficult to make hard estimates on tasks that no one else has done before, and measuring success is taken in baby steps. But the upside is that you are constantly engaged, constantly motivated, and those skills will eventually translate into better products down the road. Products that won't incur that same learning curve (not to mention the new ideas that can come from exposure to so many different technologies). I can't claim mastery of any of it, yet, but did gain a solid foundation in most of it, and it just makes me hungry for more.

So, if I have one wish for 2015 for you, dear reader, as well as myself, it is only that you continue to learn every day. Maybe not to the levels described above (for I am a bit of a lunatic), but at least take the chance to branch out into one new area of development in the coming year.

New Series: Build A Better ColdFusion App

One common complaint, among opponents of ColdFusion, is the poor code that's out there on the web. I would counter that there's a ton of bad code out there in most any language (I know, I've written some of it), but I do "get it". ColdFusion, as powerful a language as it can be, also has an extremely low barrier of entry. It was created in such a way that anyone who could write HTML could turn around and write an application in ColdFusion (hence the language design decision to use a tag syntax).

Well, just because you can write code in one way, that doesn't necessarily mean that you should. There are thousands upon thousands of blog posts, articles and books out there showing people how to do various things in ColdFusion. Many (not all) are cobbled together rather quickly, offering up functional examples on how to do this or that. And, while awesome that someone bothered to take that time and share that knowledge, they didn't always consider best practices in their examples. After all, it was just a quick example, right? You wouldn't just copy and paste it into your editor. You'd tailor it to your application's needs, right?

Well, no. Many of us have copied some bit of code and just dropped it in without revision. And often it just stays like that, until it causes some sort of issue. To top it all off, chances are you copied and pasted the same bits of code into other areas of your app, or even into new applications, again without revision.

So, I want to start a new series here at Cutter's Crossing. I'm going to call it "Build A Better ColdFusion App", and in each article I want to highlight some small bit of best practice advice, with some code examples that match up accordingly. It won't be a "Tip of the Day" bit or anything, and the articles may come at any time, but they'll all get a common header, and I'll try to group them together so they're easy to find. If you have suggestions for some of these tips, or topic areas you think would be a good fit, please fill out the Contact Form and drop me a note with your suggestions.

By The Way: I'm not perfect. I might put up something and it's hosed too, or you might have suggestions on how to do it differently and/or better. Please, call me on it. We're a community of professionals, and want to put out info of value for any and all.

Are You Running ColdFusion on a Mac?

Yes, Apple is a fantastic company, making really nice products that work very well. If someone gave me one of those nice Macs with the 27" screens today, I'd probably switch. I wouldn't buy one myself. Nice as they are, I can't justify the expense when I can buy a machine twice as powerful for half the money, and actually repair it myself when I need to. But, that's my decision. I choose the imperfections of Windows on a PC architecture as a counter to the affordability and maintainability of the platform. And, it's done really well by me.

I watched Andy Matthews accidentally spill orange juice on to Aaron West's 17" MacBook Pro once, several years ago. Aaron finally got it back from the Mac Store about two weeks later. I can't afford that kind of productivity loss. No thanks.

Again, this is just me. Many others still use a Mac, and love it, and I get it. It's a fantastic operating system, and Apple does make pretty stuff.

But I wouldn't load ColdFusion server on a Mac. Not really. Not on to OS X directly. Yet, I've heard this complaint recently. About how buggy the ColdFusion install is, and what a hassle it is to get up and running.

You're running it on a Mac? On OS X? Why?

The web doesn't run on a Mac. Yes, there are OS X servers, but have you seen that market share? The web runs on *nix and Windows. And, if you're writing code for the web, then your environment needs to match. Load up a VM, and install ColdFusion on the VM, in *nix or Windows. That makes sense.

Is that where you're having trouble? I get CF up in about 20 min on my Windows Server VM, and that includes the download time.

Now, should the ColdFusion server install easily on a Mac? Sure, I think it should. There are way too many developers out there trying to do exactly what I've described, and they're in pain. Go badger the Adobe CF team to get it straight.

But, again, what's your site running on?

You can post comments if you like, but I'm not taking the bait.

Legacy Code Part 17: The Rest of The Story

As developers, sometimes it is hard to see beyond the code. We know code. We like code. Some of us occasionally even dream in code (just ask my wife...) Knowing this, it is sometimes difficult, when troubleshooting performance of Legacy Code, to remember that the issue might not be in code.

One great area that so many developers overlook is the ColdFusion server itself. The installer makes it fairly easy to get things up and running, but is the environment truly set up optimally for your application? By default, CF only allocates 512MB of RAM to the underlying JVM, and 512MB is not much, especially in a high traffic, heavy process system. And have you ever adjusted your Request Tuning settings? Checked your Queue Timeout? What sort of template caching are you using?

Another area to look at is the JVM itself. While you can adjust some of the JVM settings from the CFIDE (in ColdFusion 10), it's still a good idea to review things like your Garbage Collection settings, your RAM allocation, and other bits. Are you loading unnecessary libraries? And are you still running the version of the JVM that shipped with the install? It might be worthwhile to download a new JDK, install it, and test your app. For one thing, it'll help keep your underlying Tomcat more secure.
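To make those knobs concrete: the arguments in question live in ColdFusion's jvm.config file. The line below is purely illustrative, not a recommendation; the right values depend entirely on your hardware and load:

java.args=-server -Xms1024m -Xmx2048m -XX:MaxPermSize=192m -XX:+UseParallelGC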

Then there's the hardware and network review. What sort of throughput are you getting from your hard drives? Is your file I/O optimal for the operations you're running? Are you storing data and assets on other network-connected systems and, if so, what sort of throughput are you getting from those internal connections?

And then we're back to your database. Hopefully it isn't on the same machine, but the same sort of review applies. Do you have enough RAM? What sort of hard drives are you using? Are you getting enough throughput from that old NIC?

Some of these things are really easy to address, taking a little time and effort and research. Others may require parts and equipment replacement, maintenance windows and downtime. It's really up to you to balance out what makes sense.

Step one: write a plan.

Step two: follow through.

With these last 17 posts I've written down a lot of little things to help anyone take their Legacy Code (outdated and tired ColdFusion applications) and begin to carry them forward into new life. Some changes are farther reaching than others, and all require careful research, planning, management and testing. To let these applications sit, without thought for modern change, is to waste the time originally spent to create them in the first place. Nothing grows without care and feeding and light.

It is my belief that you can still benefit from the hundreds of thousands of man-hours previously spent on these applications, without having to take drastic measures that could prove unnecessary or even disastrous. ColdFusion was the first web application server, outliving many competitors, and continues to grow with the times. ColdFusion continues to be a fantastic Rapid Application Development platform, allowing many developers to write applications in a fraction of the time it would take on other platforms, and there's a solid roadmap for future versions already on the board. There are thousands upon thousands of active, thriving, scalable and performant ColdFusion based applications on servers in thousands of top ranking companies in countries all around the world, still, to this day, because of the power available here.

This article is the seventeenth in a series of articles on bringing life back to your legacy ColdFusion applications. Follow along in the Legacy Code category.

Legacy Code Part 16: How You Get Data

One of the greatest culprits in poorly performing ColdFusion applications, Legacy Code or modern, is badly written SQL. ColdFusion was originally written as a middle tier between the web server and the database, giving us the ability to output queried data in HTML format. In fact, CFML was originally DBML. One of its greatest strengths has always been the ease with which one can get data back from a multitude of datasources, on a variety of platforms, and display it to a user. One of its greatest weaknesses is that it allows you, the developer, to write those queries just as poorly as you want to.

How many queries in your system start with SELECT *? Of the 35 columns in that table, are you really using more than five or ten? And do you have multiple <cfquery> calls, one after another, with each using data from the query before? Could those be translated into a single query using some well thought out joins? By the way, when's the last time you analyzed your query performance? Or rebuilt your table indexes?
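As a hypothetical example (table and column names invented for illustration), the chained-query pattern often collapses into one targeted statement:

SELECT e.entryId, e.title, e.posted, c.categoryName
FROM blogEntries e
INNER JOIN blogCategories c ON c.categoryId = e.categoryId
WHERE e.posted >= '2014-01-01'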

In most applications that I've worked on, Legacy and modern, major bottlenecks occurred due to the poor performance of queries. Applying <cfqueryparam>s helped a little, but what really made the difference was reviewing each query, running it through a query analyzer, and rebuilding or creating indexes as needed.

Some people, especially when writing quick small apps and prototypes, have used ORM frameworks for their database interaction, such as Reactor, Transfer, and now ColdFusion's own built-in implementation of Hibernate via ColdFusion ORM. These are very popular, and great for quickly standing up a new product, but they are also very object intensive, and don't necessarily give you deep introspection into what those data transactions are truly doing under the covers. Yes, they can allow you to quickly build new applications, but that doesn't necessarily mean that those applications will scale well, or continue to perform three years later the same way they did on day one.

There was a really good article by Chris Travers on DZone, a few weeks back, defending the choice to continue to hand code your SQL. It's not a very long article, but one paragraph really stood out for me, and is something that I already do.

I find that my overall development time is not slowed down by hand-writing SQL. This remains true even as the software matures. The time-savings of automatic query tools is traded for the fact that one doesn't get to spend time thinking about how to best utilize queries in the application. The fact is that, as application developers, we tend to do a lot in application code that could be done better as part of a query. Sitting down and thinking about how the queries fit into the application is one of the single most productive exercises one can do.

This is something that is very easy for me to identify with, and also a good argument for the (occasional and well thought out) use of stored procedures. Many developers stay away from stored procedures because a) they don't know how to write or use them, and/or b) those procedures aren't stored in your application code, so it's not as easy to introspect or search for things when you're making changes. While both of these may be valid arguments, in some ways, there is a performance trade-off to consider. Stored procedures do, typically, perform better, having compiled, cached execution plans on the SQL server. If you can overcome the other obstacles (and you can), you can gain from placing complex SQL logic inside of stored procs.

Again, this type of change, in a large Legacy Code system, can be long and arduous. Set a goal, with each bit of maintenance that you do, to review the queries you are using in the requests you are adjusting. Tools like FusionReactor can help you identify slow running queries, for you to target your efforts. It may pay well to hire an outside SQL resource to review your databases, monitor query performance, and provide detailed analysis and suggestions on improvement. A good DBA, even a part timer, can save a Legacy Code application from extinction.

In our next post we'll dive into some of the things that you can do at the Application Server level, to help you get the most out of your Legacy Code.

This article is the sixteenth in a series of articles on bringing life back to your legacy ColdFusion applications. Follow along in the Legacy Code category.
