2014 in Review: A Year of Learning

2014 has been an outstanding year for me, in many ways, but perhaps one of the most important things (besides my family) has been continuing to do what I love. I'm passionate about development, and constantly working to learn new things. This is important for any developer, as our world changes so quickly today. New standards, new languages, new frameworks: it's a constant onslaught of new ideas, impossible to learn each and every one, but important to get exposure to nonetheless.

The early part of the year I was still maintaining a large legacy application. We were in the final stages of migrating some very old pieces of the application into a new framework architecture (FW/1), along with a new look (based on Bootstrap 3). When you're working on legacy applications, there are rarely opportunities to dive into new things, so that project was a nice nudge to explore areas previously untouched. Back in April, though, I took on a new position that had me doing nothing but non-stop new product development. Not only was this a great switch, but the particular tasks I was given had me working with technologies with which I had little or no exposure, often without a team peer who could mentor me, as many of the technologies were new to the company as well.

Historically, I'm a server-side programmer. But over the last several years, I've spent a great deal of time honing my client-side skills. I'm no master, by any means, but I've consistently improved my front-end work over that time, and 2014 built upon it considerably.

One area where I could get some mentoring was AngularJS. This was a big shift for me, and while I'm still learning more every day, it has been an exciting change-up. Angular is extremely powerful and flexible, taking some hard things and making them dead simple (to be fair, it makes some simple things hard too ;) ). Angular is one of those things I wish I had spent more time with a year or so back, because in hindsight it could have saved me hundreds of hours of work. I'm really looking forward to working more with Angular.
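
To give a flavor of what I mean by "dead simple", here's a minimal sketch of Angular's two-way data binding (the module and controller names are just hypothetical examples). The input and the greeting stay in sync with no manual DOM code at all:

    <!-- index.html -->
    <div ng-app="demo" ng-controller="GreetCtrl">
      <input type="text" ng-model="name">
      <p>Hello, {{name}}!</p>
    </div>

    // app.js
    angular.module('demo', [])
      .controller('GreetCtrl', function ($scope) {
        $scope.name = 'world'; // initial model value; the view updates automatically
      });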

From a software craftsmanship standpoint, I also had to dive into a slew of technologies to assist in my day-to-day development. I now use Vagrant to spin up my local dev environment, which is a godsend: one quick command-line entry, and I'm up and running with a fully configured environment. I went from playing around with NodeJS to working with it day in and day out, writing my own plugins (or tweaking existing ones), and using (and writing/tweaking) both the Grunt and Gulp task runners for various automation and build tasks. To take something as "source" and convert it to "app" with a single command-line entry is the shiznit. How many hours did I waste building my own sprites and compiling LESS in some app? Now it happens at the touch of a few keys.
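
As an example of what I mean, here's a minimal gulpfile sketch for the LESS piece (the paths and task name are hypothetical). Run "gulp css" at the command line and the stylesheets are built:

    // gulpfile.js -- compile LESS to CSS with a single command: gulp css
    var gulp = require('gulp');
    var less = require('gulp-less');

    gulp.task('css', function () {
      return gulp.src('src/styles/main.less') // hypothetical entry file
        .pipe(less())                         // compile the LESS source
        .pipe(gulp.dest('app/css'));          // write the CSS to the app folder
    });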

Then there are the deep areas that some project might take you. I had to dust off years-old AS3 skills to refactor a Flash-based mic recorder. There was some extreme study into cross-browser client-side event handling. Iron.io has a terrific product for queuing up remote service containers for running small, processor-intensive jobs concurrently, without taxing your application's resources. That led to studies in Ruby, shell scripting, and Debian package deployment (not in any particular order), as well as spinning up NodeJS HTTP servers with Express.
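
The Express piece, at least, turns out to be delightfully small. Here's a minimal sketch of a NodeJS HTTP server (the port and message are arbitrary):

    // server.js -- a bare-bones Express HTTP server
    var express = require('express');
    var app = express();

    // respond to GET / with a plain text message
    app.get('/', function (req, res) {
      res.send('Hello from Express');
    });

    // start listening for requests
    app.listen(3000, function () {
      console.log('Listening on port 3000');
    });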

Did you know that you can write automated tests of browser behavior using a headless page renderer like PhantomJS? Load a page, perform some actions, and record your findings; it really is incredibly powerful. It has some hidden 'issues' to contend with as well, but it's well worth looking into, as the unit testing applications are excellent. Then you might change direction and check out everything there is to know about aspect ratio, something worth understanding well whenever you're resizing images or video.
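
To give you an idea, here's a minimal PhantomJS sketch (the URL and filename are hypothetical) that loads a page headlessly and records its title:

    // check-title.js -- run with: phantomjs check-title.js
    var page = require('webpage').create();

    page.open('http://example.com/', function (status) {
      if (status !== 'success') {
        console.log('Failed to load the page');
        phantom.exit(1);
      } else {
        // evaluate() runs this function inside the page's own context
        var title = page.evaluate(function () {
          return document.title;
        });
        console.log('Page title: ' + title);
        phantom.exit();
      }
    });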

(Did I also mention that I went from Windows to Mac on my desktop, and Windows to Linux on my dev server? Best moves I ever made!)

Speaking of video, I got the opportunity to go beyond the basics with ffmpeg video transcoding. For those unfamiliar with the process, you write a command-line entry defining what you want. Basically, it's one command with 200+ possible flags in thousands of possible combinations, and no single source of clear documentation on how to get exactly what you want (read: a lot of reading, and a lot of trial and error).
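
For flavor, here's one example of the kind of command all that trial and error produces (the file names are hypothetical, and exactly which flags are available depends on your ffmpeg build): transcoding to H.264 video with AAC audio in an MP4 container, scaled down for the web:

    # H.264 video + AAC audio, scaled to 1280px wide
    # (scale=1280:-2 preserves aspect ratio and rounds the height
    # to an even number, which H.264 requires)
    ffmpeg -i input.mov \
           -c:v libx264 -preset medium -crf 23 \
           -vf scale=1280:-2 \
           -c:a aac -b:a 128k \
           output.mp4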

If that had been all of it, it would have been a lot, but then I really got to have fun: I got to rewrite a Chrome Extension. Now, I had the advantage that we already had an extension, but I was tasked with heavily expanding on its capabilities, and it's been a blast. Going from something relatively simple to something much more complex is always a challenge, but doing so when you don't fully grasp the tech at hand is even more challenging. Google has created a brilliant triple-tier architecture for interfacing the browser 'chrome' with the pages inside it, and developing advanced extensions with injected page applications has a lot of twists and turns along the way. I've learned enough that I'm considering writing a presentation on this process for the upcoming conference season.
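
To sketch what I mean by triple-tier (the message names here are hypothetical): a script injected into the page talks to a content script via window.postMessage, and the content script relays to the background page via chrome.runtime messaging:

    // injected page script -- the bottom tier, sends a request upward
    window.postMessage({ type: 'FROM_PAGE', payload: { action: 'ping' } }, '*');

    // content-script.js -- the middle tier: runs alongside the page and
    // relays messages between the page's JavaScript and the background page
    window.addEventListener('message', function (event) {
      // only accept messages from this window, tagged by our page script
      if (event.source !== window || !event.data || event.data.type !== 'FROM_PAGE') {
        return;
      }
      // forward the page's request up to the background page
      chrome.runtime.sendMessage({ payload: event.data.payload }, function (response) {
        // relay the background page's answer back down to the page
        window.postMessage({ type: 'FROM_EXTENSION', payload: response }, '*');
      });
    });

    // background.js -- the top tier, with full access to the extension APIs
    chrome.runtime.onMessage.addListener(function (message, sender, sendResponse) {
      sendResponse({ ok: true, echoed: message.payload }); // hypothetical handler
    });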

So, in retrospect, I went from maintaining a large legacy system to doing cutting-edge custom development, learning something new each and every day. Awesome! Now, the downside to this sort of process is that you lose valuable productivity time getting through the learning curve. It's difficult to make hard estimates on tasks that no one else has done before, and success is measured in baby steps. But the upside is that you are constantly engaged, constantly motivated, and those skills will eventually translate into better products down the road, products that won't incur that same learning curve (not to mention the new ideas that come from exposure to so many different technologies). I can't claim mastery of any of it, yet, but I did gain a solid foundation in most of it, and it just makes me hungry for more.

So, if I have one wish for 2015 for you, dear reader, as well as myself, it is only that you continue to learn every day. Maybe not to the levels described above (for I am a bit of a lunatic), but at least take the chance to branch out into one new area of development in the coming year.

Task Queues with IronWorkers

As most of my readers know, I still love ColdFusion. In 15 years of development, there are still very few things I've found that I can't do quickly and easily with CF. That said, ColdFusion isn't always the right tool for the job. I don't mean that ColdFusion can't do the job, only that it's maybe not the best tool. The UI stuff is one great example: it was written so that the most novice of developers could spin something up fast, but anyone who's had to do something a bit more complex quickly finds the limitations of its generated UI code. Another is extremely resource-intensive processes. Doing image manipulation on one image occasionally isn't such a big deal, but what happens when you have to process several thousand? As powerful as the language constructs are for manipulating these images, the constant conversion of binaries into Java image buffer objects, and the increasing back and forth within RAM, bog your server down to a dead crawl.

This image example is just one of many, and when you're writing enterprise-level applications you're going to hit these hurdles; using ColdFusion in these instances is like using a hammer when you need an X-Acto blade. This is when you start looking for better options for these processes, options that can interoperate with your current ColdFusion services. In looking for just such an option, I was pointed to Iron.io.

Iron.io is a cloud-based set of services that can be run on many of the major clouds. They began with a distributed message queueing service (IronMQ), built for handling critical messaging needs in distributed cloud applications. Building on those queueing abilities, they also created IronWorkers: asynchronous processing task queues. They allow you to define what your process environment needs to look like and the task script to run, and then you can queue up tasks that run asynchronously in their own independent container environments. Once queued, IronWorker will run X number of tasks concurrently (X depending on the plan level you choose). Each task runs within its own sandbox, with its own independent RAM and processor allocation, so that one running process does not affect another. As tasks complete, their sandboxes are torn down, and the queue continues to spin up the remaining tasks on demand until the task queue is done.

The ease with which you can work with this service is amazing. Your ".worker" file defines your environment. Each instance is a headless Ubuntu system. You can select from a number of "runtime" setups, allowing you to work in the language you're most comfortable with (Node, Ruby, PHP, Java, etc.), as well as pick a specific "stack" if you want to mix and match the setup a bit. Each instance also comes preloaded with several common Linux packages (ImageMagick, cURL, SoX, etc.). Within your ".worker" file you can also define any additional files, folders, etc. that you require, including .deb packages.
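
From memory, a hypothetical ".worker" file for a Node-based image processor looks roughly like this (all the names are made up; check the IronWorker docs for the exact syntax your runtime needs):

    # image_processor.worker -- hypothetical worker definition
    runtime "node"               # which runtime to use
    stack "node-0.10"            # optionally pin a specific stack
    exec "process_image.js"      # the task script to run
    file "config.json"           # extra files to ship with the worker
    deb "my-custom-package.deb"  # a custom Debian package to install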

Once your worker is defined, assets gathered, scripts written, etc., you create your worker with a simple command-line call. Once the "build" is complete, your new task service is ready to be called. This can be done via the command line or (perhaps more commonly) via an HTTP call. You can even define a webhook for your worker that can kick off your tasks from Git or elsewhere. You can pass variables to your task as part of its "payload": a simple HTTP POST of name/value pairs that can be used within your process script. The payload is limited to 64k in size, so any files you may need on the fly (such as images to process) should be retrieved from within your process, most likely from somewhere like S3. Your process does its thing, your script sends a command to exit the process (process.exit()), and it's done; the next task in the queue spins up.
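
Pulling it together, here's a hedged sketch of a Node task script (field and worker names are hypothetical; as I recall, the payload arrives as a JSON file whose path follows a -payload flag on the command line, but verify that against the current docs):

    // process_image.js -- read the task payload, do the work, exit
    var fs = require('fs');

    // find the payload file path passed on the command line
    var payloadFile = process.argv[process.argv.indexOf('-payload') + 1];
    var payload = JSON.parse(fs.readFileSync(payloadFile, 'utf8'));

    console.log('Processing image at: ' + payload.imageUrl); // hypothetical field

    // ... fetch the image (e.g. from S3), process it, push the result back ...

    process.exit(); // signal that this task is done

And then uploading the worker and queueing a task from the command line looks something like:

    iron_worker upload image_processor
    iron_worker queue image_processor --payload '{"imageUrl": "..."}'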

There really is a lot more to it; you can keep things as simple, or go as complex, as you're capable of writing. IronWorkers are extremely powerful, and they scale beautifully. After several weeks working on multiple processes, I can also say that their support is exceptional: they have HipChat channels set up to assist people, and they've been extremely responsive and helpful. They also maintain sample repositories of many common tasks, in a variety of languages, to help you get started in whichever environment you're most comfortable with.