Book Review: “The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win”

About the book

The Phoenix Project is an IT leadership fable, using the story format to demonstrate the growing role of software engineering within businesses today and to show the reader how best to work across disciplines to help their business win. The book follows Bill, an IT manager at Parts Unlimited, who is given the overwhelming task of ensuring the success of the business-critical “Project Phoenix”, a project that is massively over budget and very late. With the help of a prospective board member and his mysterious philosophy of “The Three Ways”, Bill slowly unravels the path to success within his organisation.

My thoughts

After finishing the book, I would say that the most pleasant surprise was how enjoyable it was to read, using the fable format effectively to engross the reader in the transformation that takes place at the fictitious company, Parts Unlimited. The prospective board member Erik is a clever addition to the fable, providing a mechanism for the author to deliver more in-depth information and insights to the reader. In my eyes, the biggest strength of the fable format is that it enables the book to go beyond simply explaining the key concepts: it showcases how they could be implemented within an organisation and the value you could expect to gain.

Amongst the chaos of server outages and general unawareness of what’s really happening on the ground, Brent quickly emerges as a key character, and is someone I’m sure many IT professionals will quickly recognise – but more on him later. The book goes into depth discussing the importance of the flow of work, likening IT to a manufacturing plant, where work moves from station to station. I liked how Bill’s team quickly got a handle on the tasks, eventually introducing a Kanban board to manage the flow of work. Bill identifies the different types of work (business projects, internal projects, operational changes and unplanned work), and implements strategies to prioritise these appropriately. Coming back to Brent, I really liked the inclusion of his character, and found it fascinating how the team were able to share his knowledge and remove him as a bottleneck. The theory of constraints came into play here – it states that any optimisation made anywhere other than at the constraint is an illusion. That may seem obvious, but it was really profound for me.

The book discusses the movement within DevOps over the last few years to empower developers to deploy more frequently, and delves into how this (perhaps counter-intuitively) actually improves system reliability by getting rid of risky big-bang deployments and enabling hotfixes to quickly make their way to production. It covers the processes used to enable this, including immutable artefacts and matching environments – made easier using scripting. Importantly, it also goes into the complementary processes that should be put in place to increase business confidence, ranging from automated functional and non-functional testing to critical systems monitoring. It even goes further to discuss advanced techniques such as chaos engineering (used at Netflix), showcasing how Parts Unlimited adopted a “fail fast” culture, introducing random infrastructure errors into their live environment, enabling them to rapidly improve their resilience and security.

Finally, and crucially, the book moves beyond purely IT to discuss how the real key to success lies in understanding business goals and figuring out how IT can organise itself to best support them. KPIs (Key Performance Indicators) are covered as an effective technique to set, measure and track progress towards these key business goals. This allows the team to competently assess incoming work and reject or prioritise it accordingly by evaluating it against these KPIs. It also enables the business to assess the success of a team or project and provides transparency to other areas of the organisation. The book explores how IT should look to work across both IT and organisational boundaries, and goes even further to claim that IT is increasingly becoming the core of all large businesses. It states that industries which do not recognise the importance of IT operations and embrace the DevOps culture will leave themselves vulnerable to being disrupted – a reasonable statement, as we see companies like Amazon doing exactly that almost daily.

Closing thoughts

Overall, I would highly recommend this book to anyone working within the realm of IT (operations, development, QA, etc). It’s not only an enjoyable read, but also thoroughly insightful and educational, going beyond the theory with real world examples.

 

Making asynchronous code look synchronous in JavaScript

Why go asynchronous

Asynchronous programming is a great paradigm which offers a key benefit over its synchronous counterpart – non-blocking I/O within a single-threaded environment. This is achieved by allowing I/O operations such as network requests and reading files from disk to run outside the normal flow of the program. This enables responsive user interfaces and highly performant code.
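As a minimal illustration (not from any particular codebase – the file path here is arbitrary), the read below is handed off to the runtime while the rest of the program carries on:

const fs = require('fs');

// The read is handed off to the runtime; the callback fires once the I/O completes
fs.readFile('/etc/hosts', 'utf8', (err, contents) => {
  if (err) throw err;
  console.log('file contents arrived'); // logged second
});

console.log('carrying on without waiting'); // logged first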

The challenges faced

To people coming from a synchronous language like PHP, the concept of asynchronous programming can seem both foreign and confusing at first, which is understandable. One moment you were programming one line at a time in a nice sequential fashion; the next thing you know, you’re skipping entire chunks of code, only to jump back up to those chunks at some point later. Goto, anyone? OK, it’s not that bad.

Then you have the small matter of callback hell – a name given to the mess you can find yourself in when asynchronous callbacks are nested within asynchronous callbacks, several levels deep – and before you know it, all hell has broken loose.
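A hypothetical example (these functions are made up purely for illustration) shows how quickly the nesting builds up:

getUser(userId, (err, user) => {
  if (err) return handleError(err);
  getOrders(user, (err, orders) => {
    if (err) return handleError(err);
    getOrderDetails(orders[0], (err, details) => {
      if (err) return handleError(err);
      console.log(details); // three levels deep, and counting
    });
  });
});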

Promises came along to do away with callback hell, but for all the good they did, they still did not address the issue of code not being readable in a nice sequential fashion.
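Rewriting the same hypothetical flow with promises flattens the nesting, but the logic still lives inside then callbacks rather than reading top to bottom:

getUser(userId)
  .then(user => getOrders(user))
  .then(orders => getOrderDetails(orders[0]))
  .then(details => console.log(details))
  .catch(handleError);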

Generators in ES6

With the advent of ES6, along came a seemingly unrelated paradigm – generators. Generators are a powerful construct, allowing a function to “yield” control along with an (optional) value back to the calling code, which can in turn resume the generator function, passing an (optional) value back in. This process can be repeated indefinitely.

Consider the following function, which is a generator function (note the special syntax), and look at how it’s called:

function *someGenerator() {
  console.log(5); // 5
  const someVal = yield 7.5;
  console.log(someVal); // 10
  const result = yield someVal * 2;
  console.log(result); // 30
}

const it = someGenerator();
const firstResult = it.next();
console.log(firstResult.value); // 7.5
const secondResult = it.next(10);
console.log(secondResult.value); // 20
it.next(30); // logs 30 inside the generator, which then completes

Can you see what’s going on? The first thing to note is that when a generator is called, an iterator is returned. An iterator is an object that knows how to access items from a collection, one item at a time, keeping track of where it is in the collection. From there, we call next on the iterator, passing control over to the generator and running code up until the first yield statement. At this point, the yielded value is passed to the calling code, along with control. We then call next again, passing in a value, and with it we pass control back to the generator function. This value is assigned to the variable someVal within the generator. This process of passing values in and out of the generator continues, with the console.log statements providing a clearer picture of what’s going on.

One thing to note is how value is read from the result of each call to next on the iterator. This is because the iterator returns an object containing two properties: done and value. done indicates whether the iterator has completed; value contains whatever was yielded (or returned) by the generator.
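A stripped-down illustration (separate from the example above) of what those objects look like:

function *demo() {
  yield 7.5;
}

const iter = demo();
console.log(iter.next()); // { value: 7.5, done: false }
console.log(iter.next()); // { value: undefined, done: true }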

Using generators with promises

This mechanism of passing control out of the generator, then at some time later resuming control, should sound familiar – that’s because it’s not so different from the way promises work. We call some code, then at some point later we resume control within a then callback, with the promise result passed in.

It therefore only seems reasonable that we should be able to combine these two paradigms in some way, to provide a promise mechanism that reads synchronously, and we can!

Implementing a full library to do this is beyond the scope of this article; however, the basic concepts (sketched out after the list below) are:

  • Write a library function that takes one argument (a generator function)
  • Within the provided generator function, each time a promise is encountered, it should be yielded (to the library function)
  • The library function manages the promise fulfillment, and depending on whether it was resolved or rejected passes control and the result back into the generator function using either next or throw
  • Within the generator, yielded promises should be wrapped in a try/catch so that rejections can be caught and handled there

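To make those steps concrete, here is a rough, illustrative sketch of such a runner. It assumes the generator only ever yields promises, and it is not the awaiting-async implementation – just a minimal version of the idea:

function run(generatorFn) {
  const it = generatorFn();

  function handle(result) {
    // result is the { value, done } object returned by the iterator
    if (result.done) return Promise.resolve(result.value);

    // Wrap whatever was yielded, then resume the generator once it settles
    return Promise.resolve(result.value).then(
      value => handle(it.next(value)),  // resolved: pass the value back in
      error => handle(it.throw(error))  // rejected: throw into the generator's try/catch
    );
  }

  try {
    return handle(it.next());
  } catch (err) {
    return Promise.reject(err);
  }
}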
For a full working example, check out a bare bones library I wrote earlier in the year called awaiting-async, complete with unit tests providing example scenarios.

How this looks

Using a library such as this (there are plenty of them out there), we can go from code that looks like this:

const somePromise = Promise.resolve('some value');
somePromise
  .then(res => {
    console.log(res); // some value
  })
  .catch(err => {
    // (Error handling code would go in here)
  });

To this:

const aa = require('awaiting-async');
aa(function *() {
  const somePromise = Promise.resolve('some value');
  try {
    const result = yield somePromise;
    console.log(result); // some value
  } catch (err) {
    // (Error handling code would go in here)
  }
});

And with it, we’ve made asynchronous code look synchronous in JavaScript!

tl;dr

Generator functions can be used in ES6 to make asynchronous code look synchronous.

Getting functional in JS with Ramda

Introduction

Lately, I’ve begun programming in JS using an increasingly functional style, with the help of Ramda (a functional programming library). What does this mean? At its core, this means writing predominantly pure functions, handling side effects and making use of techniques such as currying, partial application and functional composition. You can choose to take it further than this, however, that’s a story for another day.

The pillars of functional programming in JS

Pure functions

One key area of functional programming is the concept of pure functions. A pure function is one that takes an input and returns an output; it does not depend on external system state and it does not have any side effects. For a given input, a pure function will always return the same output, making it predictable and easy to test.
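As a quick, made-up illustration:

// Impure: depends on external, mutable state
let taxRate = 0.2;
const addTaxImpure = price => price * (1 + taxRate);

// Pure: everything it needs comes in as arguments, so the same
// inputs always produce the same output
const addTax = (rate, price) => price * (1 + rate);

addTax(0.2, 100);
// => 120 (every time)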

Side effects

It’s worth mentioning that side effects are sometimes unavoidable, and there are different techniques you can adopt to deal with them. The key objective is to minimise side effects and handle them away from your pure functions.
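A small, made-up sketch of what that can look like, keeping the side effect at the edge:

// The formatting logic stays pure (and easy to test)...
const formatGreeting = name => `Hello, ${name}!`;

// ...while the side effect (writing to the console) is pushed to the edge
const greet = name => console.log(formatGreeting(name));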

Currying

One of the key building blocks of functional programming is the technique of currying. This is where you take a polyadic function (one with multiple arguments) and translate it into a sequence of monadic functions (functions that take a single argument). This works by each function in the sequence returning a new function that takes the next argument, until all of the arguments have been supplied. This allows you to partially apply functions by fixing a number of arguments. Importantly, it also enables you to compose functions together, which I’ll get onto later.

Example of partial application:

// Function for multiplying two numbers together
const multiplyTogether = (x, y) => {
  return x * y;
}
multiplyTogether(2, 5);
// => 10

// Curried multiplication of two numbers
const multiplyTogetherCurried = x => y => {
  return x * y;
}
multiplyTogetherCurried(2)(5);
// => 10

// Partial application used to create double number function
const doubleNumber = multiplyTogetherCurried(2);
doubleNumber(5);
// => 10

Composition

Building on currying and adopting another functional discipline of moving data to be the last argument of your function, you can now begin to make use of functional composition, and this is where things start to get pretty awesome.

With functional composition, you create a sequence of functions, each of which (after the first in the sequence) must be monadic. Each function feeds its return value into the next function in the sequence as its argument, and the result is returned at the end of the sequence. We do this in Ramda using compose. Adopting this style can make code not only easier to reason about but also easier to read and write. In my opinion, where this style really shines is in data transformation, allowing you to break down potentially complex transformations into logical steps. Ramda is a big help here: although you could simply make use of compose and write your own curried monadic functions, Ramda is itself a library of super useful, (mostly) curried functions, covering mapping over data, reducing data, omitting data based on keys, flattening and unflattening objects and so much more!
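As a small, made-up illustration of the style (the data and field names are invented):

import R from 'ramda';

// compose applies right to left: sort, then filter, then pick the fields we want
const formatUsers = R.compose(
  R.map(R.pick(['name', 'email'])),
  R.filter(user => user.active),
  R.sortBy(R.prop('name'))
);

formatUsers([
  { name: 'Zoe', email: 'zoe@example.com', active: true },
  { name: 'Ade', email: 'ade@example.com', active: false },
]);
// => [{ name: 'Zoe', email: 'zoe@example.com' }]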

Imperative vs functional

Now that you’ve (hopefully) got a better idea of what functional programming is, the question becomes: is following an imperative style wrong? In my opinion, no. When it comes down to choosing between imperative and functional programming in JS, I believe you have to be pragmatic – whilst functional may be your go-to choice, there are times when you have to ask yourself whether a simple if/else statement will do the job. That said, adopting the discipline of writing pure functions where possible, managing side effects, and handling data transformations using functional composition will likely make your life as a developer a lot easier and more enjoyable. It sure has for me!

A worked example using Ramda

I’ve included a worked example of a function which I rewrote from a predominantly imperative style to a functional style, as I felt the function was becoming increasingly difficult to reason about and, with further anticipated additions, I was concerned it would become increasingly brittle.

Original function:

import R from 'ramda';

const dataMapper = (factFindData) => {
  const obj = {};

  Object.keys(factFindData).forEach(k => {
    if (k === 'retirement__pensions') {
      obj.retirement__pensions = normalizePensions(factFindData);
      return;
    }

    if (k !== 'db_options' && k !== 'health__applicant__high_blood_pressure_details__readings') {
      obj[k] = factFindData[k];
      return;
    }

    if (k === 'health__applicant__high_blood_pressure_details__readings') {
      if (factFindData.health__applicant__high_blood_pressure !== 'no') {
        obj.health__applicant__high_blood_pressure_details__readings = factFindData[k];
      }
    }
  });

  return {
    ...emptyArrays,
    ...R.omit(['_id', 'notes', 'created_at', 'updated_at'], obj),
  };
};

Refactored function:

import R from 'ramda';

const normalizeForEngine = x => ({ ...emptyArrays, ...x });
const omitNonEngineKeys = R.omit(['_id', 'notes', 'created_at', 'updated_at', 'db_options']);

const normalizeBloodPressure =
  R.when(
    x => x.health__applicant__high_blood_pressure === 'no',
    R.omit(['health__applicant__high_blood_pressure_details__readings'])
  );

const mapNormalizedPensions =
  R.mapObjIndexed((v, k, o) => k === 'retirement__pensions' ? normalizePensions(o) : v);

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeBloodPressure,
    mapNormalizedPensions
  );

As you can see, when trying to figure out what the data mapper is doing in the original function, I have to loop through an object, update and maintain the state of a temporary variable (in my head), check against multiple conditions on each iteration, and then take that result and stick it into an object, remembering to remove certain keys.

With the refactored function, at a glance I can say that I’m normalising pensions, then normalising blood pressure, then omitting non-engine keys, before finally normalising the data for the engine. Doesn’t that feel easier to reason about? If a new requirement came in to normalise, let’s say, cholesterol readings, I would simply slot another curried function – called, for argument’s sake, normalizeCholesterol – into the composition after normalizeBloodPressure, along the lines of the sketch below.
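Purely hypothetically (the key names below are invented to mirror the blood pressure example), that could look something like this:

const normalizeCholesterol =
  R.when(
    x => x.health__applicant__high_cholesterol === 'no',
    R.omit(['health__applicant__high_cholesterol_details__readings'])
  );

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeCholesterol, // runs after normalizeBloodPressure, as compose applies right to left
    normalizeBloodPressure,
    mapNormalizedPensions
  );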

Conclusion

Functional programming in JS using Ramda can not only reduce your codebase in size, but it can also increase its readability and testability, and make it easier to reason about.


This article was also featured on the Wealth Wizards Engineering Blog, where you can find lots of great content from our team of engineers.