Friday, September 14, 2012

I $.Promise - Part 2 - Chained and Parallel Async Calls

We established in I $.Promise - Part 1 - Why do jQuery Promises exist? that promises solve some conceptual problems created by using a traditional callback function.

Okay, now what?

In this post, I am going to show how a traditional callback-style approach leads to real, seemingly unsolvable scenarios (or at the least ugly, unmaintainable code).  And, of course, how easy these same scenarios are with Promises instead.

Fetching a model from a server before updating the UI

You've got this model which knows how to fetch its data from the server and populate itself.  You want to fire that off, get some data back, and when that's all done update your UI with that new information.  Pretty simple stuff, but the flow across all these examples looks like this:
  1. Make a new model.
  2. Fetch new data from the server.
  3. Set local properties on the model from inside the model itself.
  4. Update the UI with new information after all properties are set.
First, with callbacks:
var model = function () {
    var self = this;
    self.setProperties = function () {
        // Do some sets with data
    };
    self.fetch = function (callback) {
        $.get('...', function (args) {
            self.setProperties(args);
            callback();
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

This works all well and good!  Not too bad!  But what if it takes more than one async operation to fully populate this model?  Let's say it's a composite model that needs to make calls to two different endpoints.  Okay, this will be interesting.  Let's start with a model that chains its callbacks.

var model = function () {
    var self = this;
    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };
    self.fetch = function (callback) {
        // Fire off the request for the first bit of information
        $.get('...', function (args) {
            self.setPropertiesFromFetch1(args);

            // Get the second bit of information
            $.get('...', function (moreArgs) {
                self.setPropertiesFromFetch2(moreArgs);

                callback();
            });
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

This is made easier by callback() being within our closure.  If we wanted to extract those anonymous functions into their own named functions on the model, we'd need to pass callback along every step of the way.

Beyond that, this is slower than it needs to be as you are chaining rather than running these $.get() calls in parallel.  Okay, let's speed it up, using the simplest way I know how (using callbacks) to ensure that we don't execute our passed-in callback function until it's all complete.

var model = function () {
    var self = this;

    self.hasFetched1 = false;
    self.hasFetched2 = false;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets

        self.hasFetched1 = true;
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets

        self.hasFetched2 = true;
    };

    self.callIfDone = function (callback) {
        if (self.hasFetched1 && self.hasFetched2) {
            callback();
        }
    }

    self.fetch = function (callback) {
        // Fire off the request for the first bit of information
        $.get('...', function (args) {
            self.setPropertiesFromFetch1(args);

            self.callIfDone(callback);
        });

        // Get the second bit of information (in parallel now!)
        $.get('...', function (moreArgs) {
            self.setPropertiesFromFetch2(moreArgs);

            self.callIfDone(callback);
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

Well, we've sped things up, but some downfalls of this approach are becoming clear:
  1. Having "hasFetched" properties is ridiculous but necessary under a callback-based approach if you want to block the execution of callback() until the completion of async calls run in parallel.
  2. There's redundancy in the dual execution of callIfDone().
  3. Look at how big our code got!  Imagine if there were 10 child models; you'd have to write some special handler and call it 10 times or do a lot of copy-paste, being sure that callback() gets executed at the right time all the while. 
  4. It's up to the model itself to marshal when to actually execute the callback function given to it.
The complexity of chaining and running asynchronous operations in parallel quickly becomes overwhelming, not to mention verbose.  I was going to write one more example of how you'd run one operation in series, followed by two in parallel, with callbacks only, but I'm not that masochistic.  It's a pain, to say the least.
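At the very least, the countdown bookkeeping can be extracted into a tiny generic helper so you aren't hand-writing hasFetchedN flags for every child model.  Here's a sketch in plain JavaScript (join is a hypothetical name of my own, not a jQuery API):

```javascript
// Hypothetical helper: returns a function that invokes `callback`
// only after it has itself been called `count` times.
function join(count, callback) {
    var remaining = count;
    return function () {
        remaining -= 1;
        if (remaining === 0) {
            callback();
        }
    };
}

// Usage sketch: each $.get() success handler would call oneDone()
// instead of juggling hasFetched1/hasFetched2 flags.
var oneDone = join(2, function () {
    console.log('Both fetches complete; refresh the UI');
});
oneDone(); // first fetch finished
oneDone(); // second fetch finished -- callback fires now
```

This helps, but the model is still the one marshaling the consumer's callback, which is exactly the responsibility Promises are about to take away.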
The good news is that this is super simple using a Promise-based approach.  Let's blow this mess up and rewrite it.

A Promise-based Approach

First, let's go back to the simple case of just one call to $.get() and attaching the success handler.

var model = function () {
    var self = this;

    self.setProperties = function (args) {
        // Do some sets
    };

    self.fetch = function () {
        // Fire off the request for the first bit of information
        // Because we attach this done() handler first, we can
        // be assured that it'll get executed before any handlers
        // that others may add to this function chain later.
        var fetching = $.get('...').done(self.setProperties);
        return fetching;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

Way simpler.

Note that the model no longer cares about the consumer's callback at all!  The model's concerns are those of populating itself and nothing more--no callback function marshaling! All we need to do is return a promise that's resolved when the model wishes and let whoever receives it attach to it as they will.

In this scenario, our model says that fetch() is complete when $.get() is complete, so we can just return the promise that $.get() gives us straight away. But things aren't always quite so simple.

Running async operations in parallel using $.when()

Let's show how to make this a composite model with multiple $.get() calls just as before, using $.when().

var model = function () {
    var self = this;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };

    self.fetch = function () {
        var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
        var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);
        
        // $.when returns a **new** promise
        var fetchingBoth = $.when(fetching1, fetching2);
        return fetchingBoth;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

$.when() is used when you want a new promise that is only done when both of its children are done (and is rejected if any child gets rejected).  Here are some notes about $.when():
  • $.when() returns a new promise that wraps its inner promises.
  • The promise that it returns is only resolved after all promises given to it are resolved and their done() callbacks have completed.
  • The promise that it returns is rejected if any child is rejected.  Its fail() callbacks are executed only after the inner promise's fail() callbacks are complete.  Other promises are unaffected.
  • You cannot pass an array to $.when().  If you want to do this, you'll need to use $.when.apply($, promiseArr).
  • If you give something that's null, undefined, a plain object or otherwise not a promise, $.when() will treat it as a Promise that's already resolved rather than throwing any kind of error.  So $.when(promise1, null, promise2) equates to $.when(promise1, promise2).  Keep this in mind, as it's burned me when trying to figure out why $.when() was calling its attached done() handlers too early.
Because the promise returned by $.when() doesn't have resolve() called on it until each of its inner promises has had its done() chain complete, we can be assured that by the time refreshTheUI() gets called, all properties have been set via the setProperties() functions.  Awesome.
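To make those notes concrete, here is a toy, success-only sketch in plain JavaScript of how a $.when()-over-an-array could work internally (makeDeferred and when are hypothetical stand-ins of my own, not jQuery's actual implementation):

```javascript
// Toy deferred (success path only), just to illustrate the mechanics.
function makeDeferred() {
    var callbacks = [], resolved = false;
    return {
        done: function (cb) {
            // Already resolved? Run immediately. Otherwise queue it.
            if (resolved) { cb(); } else { callbacks.push(cb); }
            return this;
        },
        resolve: function () {
            resolved = true;
            callbacks.forEach(function (cb) { cb(); });
        }
    };
}

// Sketch of $.when over an array -- roughly what
// $.when.apply($, promiseArr) buys you without having to
// spell out each promise as a separate argument.
function when(promiseArr) {
    var joined = makeDeferred();
    var remaining = promiseArr.length;
    if (remaining === 0) { joined.resolve(); return joined; }
    promiseArr.forEach(function (p) {
        p.done(function () {
            remaining -= 1;
            // Only resolve the joined promise once every child has resolved.
            if (remaining === 0) { joined.resolve(); }
        });
    });
    return joined;
}
```

Note that the empty-array case resolves immediately--the same spirit as real $.when() treating non-promises as already resolved.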

Note that as of jQuery 1.8, Deferred.pipe() is effectively an alias for Deferred.then().  I am leaving the below as written, but in your code, replace .pipe with .then if possible.  Recognizing that Deferred.then() now serves a dual purpose (as shorthand for attaching handlers and as pipe) is important for advanced promise usage.

Going from parallel calls to chained calls is easy with Deferred.pipe()

$.when() is to "parallel" as Deferred.pipe() is to "serial/chained".  Until recently, the documentation for Deferred.pipe() was exceptionally unclear on its usage, though I'm happy to say it's been made a bit clearer as of this writing.  It talked about filtering (something I find to be a rare use case) and really glossed over the very important fact that Deferred.pipe() returns a new promise, just like $.when().

It's easiest for now to just think of Deferred.pipe()* as a chained alternative to $.when(), though its usage differs slightly.

* Note how I called it "Deferred.pipe()"?  This is because it's called on the Promise you want to chain off of and doesn't exist on its own like $.when() does.

Let's put it to use.

var model = function () {
    var self = this;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };

    self.fetch = function () {
        var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
        
        // Deferred.pipe also returns a **new** promise
        var fetchingBoth = fetching1.pipe(function () {
            // This part here is only executed once fetching1 returns successful
            // since the first parameter of pipe is the done callback
            var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);

            // Returning fetching2 here will 'pipe' its results into fetchingBoth
            // Therefore, fetchingBoth is only successful when both fetching1
            // and fetching2 are successful.
            return fetching2;
        });
        return fetchingBoth;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

Here are some notes to keep in mind when using Deferred.pipe():
  • It's called on the promise you want to chain from.
  • It returns a new promise.
  • Whether the promise that pipe() returns is resolved or rejected depends on the result of the inner returned promise, if one is given.
  • Its method signature matches that of Deferred.then(), and although the documentation makes no mention of it as of this writing, as of jQuery 1.8, Deferred.then() is an alias for Deferred.pipe().  They are exactly the same, even though their stated purposes are quite different.
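If seeing the mechanics helps, here's a rough, success-only sketch in plain JavaScript of what pipe() does (makeDeferred and pipe are hypothetical stand-ins, not jQuery's actual code): it creates a brand-new deferred, and what the filter function returns--an inner promise or a plain value--determines how that new deferred resolves.

```javascript
// A bare-bones stand-in for a jQuery Deferred (success path only).
function makeDeferred() {
    var callbacks = [], resolved = false, value;
    return {
        done: function (cb) {
            if (resolved) { cb(value); } else { callbacks.push(cb); }
            return this;
        },
        resolve: function (v) {
            resolved = true;
            value = v;
            callbacks.forEach(function (cb) { cb(v); });
        }
    };
}

// Sketch of pipe(): returns a *new* promise whose resolution is
// driven by whatever the doneFilter returns.
function pipe(promise, doneFilter) {
    var piped = makeDeferred();
    promise.done(function (v) {
        var result = doneFilter(v);
        if (result && typeof result.done === 'function') {
            // The filter returned an inner promise: piped resolves
            // only when (and with what) the inner promise resolves.
            result.done(function (innerValue) { piped.resolve(innerValue); });
        } else {
            // The filter returned a plain value: that value becomes
            // piped's resolved value (the "filtering" use case).
            piped.resolve(result);
        }
    });
    return piped;
}
```

In the model example above, returning fetching2 from the filter is the inner-promise case; returning, say, a transformed piece of the response would be the filtering case.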

Getting even more complex is simple with Promises

Creating something with exceptionally complex async logic is super simple using Promises.  Remember that "serial then parallel" challenge I mentioned using callbacks?  Well it's as simple as this:

    self.fetch = function () {
        var fetchMeFirst = $.get('...');
        
        // fetchingEverything is a new promise that's successful
        // if the three $.get() calls complete successfully.
        var fetchingEverything = fetchMeFirst.pipe(function () {
            var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
            var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);

            return $.when(fetching1, fetching2);
        });
        return fetchingEverything;
    };

Promises are freaking awesome.

Wednesday, September 12, 2012

I $.Promise - Part 1 - Why do jQuery Promises exist?

When I first learned about Promises, I couldn't rationalize their existence.  I simply didn't know what they were for.  Sure, I had been told how great they were, but I made do with callbacks and events and never saw a need for anything else.  After having been exposed to this brand new world, I don't know how I managed without them!

In this first post, as an introduction to jQuery Promises, I hope to answer the question of why Promises exist and how they solve some very basic problems traditionally handled by callback functions.

Building up to promises from traditional callbacks

So you've got this thing called a Promise.  Now what the heck is it?  Well, let's start by thinking about the success callback function of $.get().

$.get('...', function () {
    console.log("Data gotten!");
});

Okay, seems well and good.  But now what if you want to execute more than one success function? Maybe the success function is a function that calls two more success functions!  That'll be cool!

var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

var executeSuccessFunctions = function () {
    firstSuccessFunction();
    secondSuccessFunction();
};

$.get('...', executeSuccessFunctions);

One major problem with this: the authority on what gets executed upon $.get() success is the executeSuccessFunctions function and not whoever actually called $.get()!

In order to take control away from executeSuccessFunctions (because it might not know what we want to do, or might exist in a different scope altogether), we need to give an agnostic execute function to $.get() and let the caller construct what they want to execute.

var successFunctions = [];
var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

successFunctions.push(firstSuccessFunction);
successFunctions.push(secondSuccessFunction);

var executeSuccessFunctions = function () {
    for (var i = 0; i < successFunctions.length; i++) {
        successFunctions[i](); // Execute it
    }
}

$.get('...', executeSuccessFunctions);

More unsolved problems

  • At first glance, it might appear that you can push a function onto successFunctions after the $.get() line and have it execute on success.  What this really creates is a race condition: if the $.get() returns after you do this, no problem; however, if it returns before you push a new function onto the successFunctions array, that function will never get executed.  Net result: we still need to define all of our success callbacks before we actually make our AJAX call.
  • We've written unnecessary plumbing code that'd need to get pasted all over the place for executing each function in an array or whatever other logic might exist.
  • We cannot easily accommodate all possible code paths.  What if the AJAX call itself fails?  Do I then need to construct a new errorFunctions array for storing those?  What if there's overlap and I want to execute firstSuccessFunction under both scenarios?  The pure plumbing required to hook all this up is ginormous.
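To make the race condition in that first bullet concrete, here's a plain-JavaScript simulation (no real AJAX; a synchronous call stands in for an early server response):

```javascript
var successFunctions = [];
var log = [];

successFunctions.push(function () { log.push('A'); });

var executeSuccessFunctions = function () {
    for (var i = 0; i < successFunctions.length; i++) {
        successFunctions[i](); // Execute it
    }
};

// Pretend this is $.get('...', executeSuccessFunctions) and the
// server happened to respond immediately:
executeSuccessFunctions();

// Pushed after the "response" arrived -- it will never be executed.
successFunctions.push(function () { log.push('B'); });

console.log(log); // only ['A']; 'B' lost the race
```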

Well, Promises solve all of these problems and more via an entirely different convention.

Rather than taking an array of success callbacks to execute when it's done doing its thing, $.ajax() delegates that responsibility to a third-party object and returns that object immediately to the caller.
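That third-party object can be sketched in a few lines of plain JavaScript.  This toy, success-only version (makeDeferred is my own stand-in, nothing like jQuery's real implementation) shows why attaching late is safe: callbacks added after resolution simply run immediately.

```javascript
// Toy sketch of the convention: the async producer resolves a
// deferred; consumers attach done() handlers whenever they like.
function makeDeferred() {
    var callbacks = [], resolved = false, value;
    return {
        done: function (cb) {
            // Already resolved? Run immediately. Otherwise queue it.
            if (resolved) { cb(value); } else { callbacks.push(cb); }
            return this;
        },
        resolve: function (v) {
            resolved = true;
            value = v;
            callbacks.forEach(function (cb) { cb(v); });
        }
    };
}

var log = [];
var getting = makeDeferred();      // what a $.get()-style API would return

getting.done(function () { log.push('A'); }); // attached before resolution
getting.resolve('data');                      // the "server responds"
getting.done(function () { log.push('B'); }); // attached after -- still runs!

console.log(log); // ['A', 'B'] -- no race condition
```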

Promise-based implementation

Here's our example again using Promises.

// We don't need to give a callback to $.get() as it
// isn't going to execute our callbacks. On
// AJAX success, it's going to tell the Promise
// it returned to execute **its** callbacks
// instead. This call to resolve() by $.ajax()
// happens when the server responds.

var getting = $.get('...');

var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

// analogous to successFunctions.push from before
getting.done(firstSuccessFunction); 
getting.done(secondSuccessFunction);


With that tiny bit of code, we've solved all of our stated problems with standard callbacks:
  • The caller of $.get() is the authority of what gets executed upon the success or failure of $.get().
  • We can define and attach a callback function at any time--before or after $.ajax() has returned and resolved, even in an entirely new scope, simply by handing off the Promise object.  Whether the Promise is resolved yet or not is irrelevant: if it is, the function passed to done() will execute immediately.  If it isn't, it'll execute later when the Promise is resolved.
  • There's no plumbing code about how to execute things or what callback chain to execute.  That's handled by the Promise itself and the authority over the Promise (in our case $.get) respectively.
  • To create an entirely different chain of callbacks on failure, we just attach new handlers via .fail() instead.  If we want something to execute no matter what, that's when we use .always().

The code shown here is deliberately simple, but I hope when you look at it you do not merely see callback equivalents with a different syntax.  Promises are a whole new paradigm that will significantly change the way you structure your applications.

In short, Promises are awesome.