Tuesday, September 10, 2013

jQuery deferred resolution and rejection based on boolean value

Do you hate writing this as much as I do?

    var dfd = $.Deferred();
    if (something) {
        dfd.resolve();
    } else {
        dfd.reject();
    }

Well no more!  Enter "cond".

$.Deferred = (function (baseDeferred) {
  var slicer = [].slice;
 
  var cond = function (truthy) {
    return this[truthy ? 'resolve' : 'reject'].apply(this, slicer.call(arguments, 1));
  };
 
  return function() {
    var dfd = baseDeferred.apply(this, arguments);
    dfd.cond = cond;
    return dfd;
  };
 
})($.Deferred);

Unfortunately, as shown by the relative complexity of this code, jQuery doesn't actually use a prototype for the deferred objects it creates, so you have to 'duck punch' the feature in.

Usage (after including the previous script somewhere):

var dfd = $.Deferred();

dfd.cond(true, { some: 'data' });

dfd.then(function (res) {
    console.log(res);
});
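To see the dispatch in isolation (no jQuery required), the same cond body can be exercised against a plain stub object whose resolve/reject methods just record their calls. The stub below is purely illustrative; only the cond function is from the snippet above.

```javascript
// A stub "deferred" that records calls instead of resolving anything.
var calls = [];
var stubDfd = {
    resolve: function () { calls.push(['resolve'].concat([].slice.call(arguments))); },
    reject:  function () { calls.push(['reject'].concat([].slice.call(arguments))); }
};

// The same cond from above
var slicer = [].slice;
var cond = function (truthy) {
    return this[truthy ? 'resolve' : 'reject'].apply(this, slicer.call(arguments, 1));
};

cond.call(stubDfd, true, { some: 'data' }); // dispatches to resolve({ some: 'data' })
cond.call(stubDfd, false, 'denied');        // dispatches to reject('denied')
```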

Sunday, August 18, 2013

How to create your own EventEmitter in NodeJS in 3 Steps

Note: This post does not seek to explain what the EventEmitter is.  For that, look at Node's site.  I'm going to show you how to make your custom object into one.  This is handy for many reasons.  I recently created a node module which wraps grunt and exposes the output from it as an EventEmitter.


Step 1: Bring in the modules


// events contains the EventEmitter 
// class we will inherit from
var events = require('events');
// and util contains `inherit` which 
// we will use to set up our object's inheritance
var util = require('util');

Step 2: Construct your class, followed by the EventEmitter's

You will want to call the EventEmitter's constructor at the end of your own.  It's standard practice when inheriting from another class to call into the parent's constructor in order to preserve any functionality within it.  I've excluded it before, and it's not the end of the world; it just sets some default properties.  You can see what the constructor does in Node's source on GitHub.

var MyClass = function () {
    // Your class' constructor magic

    // Call the EventEmitter constructor
    events.EventEmitter.call(this);
}

Step 3: Set up the prototypal inheritance

util.inherits(MyClass, events.EventEmitter);

Well that was easy.  What this does is set up the prototype chain for us.  You can read more about util.inherits on Node's website.

The Final Code

var util = require("util");
var events = require("events");

function MyClass() {
    // Your class' constructor magic

    events.EventEmitter.call(this);
}

util.inherits(MyClass, events.EventEmitter);

MyClass.prototype.emitTest = function(data) {
    this.emit("data", data);
}

// Consumption

var instance = new MyClass();
instance.on('data', function (msg) {
    console.log(msg);
});

instance.emitTest("Hello World"); //=> Hello World

Tuesday, April 23, 2013

Express REST Controller - Rapid REST APIs

Ever wanted to create a RESTful interface on your Node/Express server that exposes an in-memory Backbone Collection with one line of code?

Okay, so that's probably just me but let me know what you think of this:

Express REST Controller on GitHub

It can be used to create multiple REST APIs on a Backbone Collection like so:

var app = express();
var customerController = new Controller().bind(app, 'customer');
var orderController = new Controller().bind(app, 'order');
var itemController = new Controller().bind(app, 'item');

After executing these lines you can now hit your Node/Express server with full CRUDL at /customer, /order and /item.

I think it's cool.

Monday, April 01, 2013

Making your Backbone objects into promises

So awesome.

I used to return promises from my views, for example.  Now I just make my entire view implement the Promise API.  In this fashion, whoever creates the view can wait for a result (think modal popup with the possibility of multiple pages of user data being returned back to the base page).


    var MyChildView = Backbone.View.extend({
        dfd: null,

        initialize: function () {
            this.dfd = $.Deferred();
            this.dfd.promise(this); // Make the view a promise

            // Bind so that remove() runs with the view as its context
            this.dfd.always(_.bind(this.remove, this)); // Self cleanup anyone?
        },

        // Later
        method: function () {
            this.dfd.resolve({ foo: 'bar' });
        }
    });

This now enables something like the following.

    var MyParentView = Backbone.View.extend({
        render: function () {
            var childView = new MyChildView();
            childView.then(this.doSomethingElse); // Do something meaningful with the promise
        }
    });

This approach is clearly superior to events or success callbacks when your parent is waiting on the result of its child.  It also allows your views to become the parents of entire asynchronous workflows orchestrated with promises.
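Conceptually, dfd.promise(target) copies the read-only promise methods onto target, which is why the view itself becomes then-able. Here's a library-free sketch of that grafting idea; this is NOT jQuery's implementation, just enough machinery to show methods being attached to another object.

```javascript
// Minimal stand-in for jQuery's dfd.promise(target) trick.
function makeDeferred() {
    var callbacks = [];
    return {
        resolve: function (value) {
            callbacks.forEach(function (cb) { cb(value); });
        },
        promise: function (target) {
            // Graft the read-only side onto the target (a view, say)
            target.done = function (cb) { callbacks.push(cb); return target; };
            return target;
        }
    };
}

var view = { remove: function () {} }; // stand-in for a Backbone view
var dfd = makeDeferred();
dfd.promise(view);                     // the view is now "done-able"

var received = [];
view.done(function (result) { received.push(result); });
dfd.resolve('saved');                  // received becomes ['saved']
```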

Presentation on Backbone application organization

I recently gave a few talks (Thomson Reuters, Best Buy and at JS MN) about how Backbone JS applications should be organized.  It speaks to common errors, bug-prone approaches, best practices, and just plain general organization.

It's filled with dogma that's meant to set a team straight when it comes to working together and preventing the evil spaghetti monster from creeping into your app.

I'll break it down later and hopefully provide some annotated sample files, but until then here it is.

My presentation on Backbone JS application organization

Tuesday, January 22, 2013

I $.Promise - Part 3 - How to Make Your Own jQuery Promises

In Part 1 and Part 2 I talked about why promises exist and what you can do with them, but the true power of Promises cannot be realized until you start creating them yourself.  They're especially handy for points of user interaction, animations and blended AJAX operations.

The basic form

Creating your own promise/Deferred implementation follows four basic steps:
  1. Create the Deferred object.  I use the word Deferred here on purpose: this object should be kept internal to your function or object and never passed to the consumer.  That way no one can muck with the resolution or rejection of your promise, and your separation of concerns stays clear.
  2. Do some async stuff.  Whether it's an AJAX call, animating or getting feedback from the user.
  3. Resolve or Reject your Deferred.  At some point this thing needs a result.  Your method controls when and with what it's resolved or rejected.
  4. Return a Promise.  A promise is a deferred without the capabilities of resolving or rejecting.
// Totally contrived example of showing an edit form in a modal in Backbone.
function modalEdit(model) {
    // 1. Create the Deferred object.
    var editing = $.Deferred();

    // 2. Do some async stuff.
    var editView = new ModalEditView({ model: model, el: $('#modalSpot') });
    editView.render();

    // 3. Resolve or reject your deferred.
    model.on('sync', function () {
        editing.resolve(model); // Resolve on save.
    });
    editView.on('cancel', function () {
        editing.reject(); // Reject on dialog cancel.
    });

    // 4. Return a Promise.
    return editing.promise();    
}


Your core Deferred object control functions are:

Deferred.resolve(args) - Executes 'done' handlers, passing args.
Deferred.reject(args) - Executes 'fail' handlers, passing args.
Deferred.notify(args) - Executes 'progress' handlers, passing args.

That's really all you need to get going.  If you'd like to see why you might want to do this, read on!

Putting it to use: An advanced use case

I recently did some work related to displaying a hierarchical "family tree" in SVG, driven by Raphael and Backbone (that's another post).

Prior to displaying the tree, there was a variable series of checks and prompts that had to be performed, all asynchronous.

It's a mess, and don't focus on it too hard, but the logic I came up with to understand it looks like this:

// Call create endpoint (get guid)
    // If NOT Affiliate Tree
        // Call get endpoint with guid
            // Render
    // Else (Affiliate Tree)
        // When done, check to see if user needs to be warned of billing
            // If Needing to warn, Prompt user
                // If User Accepts
                    // Post to bill
                        // Billing success, create tree
                            // Call get endpoint with guid
                                // Render
                // Else (User Declines (do nothing))
            // Else (No warning)
                // Post to bill
                    // Billing success, create tree
                        // Call get endpoint with guid
                            // Render
 
Now, try and think how you'd do this with callbacks.  Okay, now stop because your brain will explode.

Here's what my overridden fetch method on my Backbone model looked like in the end.  Please note:
  1. This method is pure workflow, no implementation details.
  2. This method returns a promise, so as far as calling fetch goes it's exactly the same as a native fetch implementation (minus the args).
  3. This method represents a variable number of async events comprised of AJAX operations and user prompts.
  4. There are no callbacks being passed into my business logic functions, meaning "promptWarnTree" doesn't care about what happens next; deciding that is the job of the workflow.  The "promptWarnTree" function just lets the workflow know when it's done.
  5. If at any point any promise is rejected, the entire flow ends and the tree is not shown.

fetch: function () {
    var orchestrating = createTree().then(function (type) {
        if (type === 'affiliate') {
            return checkWarnTree().then(function (warn) {
                if (warn) {
                    return promptWarnTree().then(function () {
                        return billTree();
                    });
                }
            });
        }
    });

    return orchestrating.then(function () {
        return getTree();
    });
}

I won't walk you through how this logic all works (for that check out my other posts on promises in this series), but the take-away should be how expressive and compact it is.

Specific implementation examples

In the scenario above, some methods are simple wrappers for other models or normal AJAX calls, but some are user interactions, such as asking the user if they want to accept the billing charges:

function promptWarnTree () {
    // I like to name all of my promise vars with "-ing" suffixes.  Makes things read nicely.
    var prompting = $.Deferred();

    // modalConfirm doesn't implement the Promise API
    // params: message, success, failure
    modalConfirm("Are you sure you wish to incur these charges?", prompting.resolve, prompting.reject);

    return prompting.promise();
}

Passing a value to attached callbacks

Notice how checkWarnTree's done callback has a "warn" parameter which tells the workflow of the result?  Passing a value here is easy via Deferred.resolve() and Deferred.reject() argument passing:

function checkWarnTree() {
    var checking = $.post('...').then(function (resp) {
        // In reality this was more complex, but the idea is that
        // checking is not the result of $.post(); it's a trimmed-down
        // version of the response, or a derivative value.  This way,
        // the callbacks attached to checking get only the result I
        // want them to have, not the full response object.
        return $.Deferred().resolve(resp.warn);
    });
    return checking;
}

Deferred.resolveWith() and Deferred.rejectWith() are also available and are used to pass a context to the callback as well as args.  I use them infrequently.
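The context-passing these variants do boils down to Function.prototype.apply. A tiny library-free illustration (the view object and handler here are made up for the example):

```javascript
// resolveWith(context, args) effectively runs handler.apply(context, args)
// for each attached handler.
var view = { name: 'settingsView' }; // hypothetical context object

function onDone(result) {
    // `this` is whatever context the deferred was resolved with
    return this.name + ' got ' + result;
}

// What resolveWith(view, ['data']) does for each attached handler:
var withContext = onDone.apply(view, ['data']); // 'settingsView got data'
```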

Closing

Creating your own promises is a great way to separate concerns in your application, make something that foolishly doesn't implement the promise API do so, and build up deep, long or otherwise variable workflows.

Friday, September 14, 2012

I $.Promise - Part 2 - Chained and Parallel Async Calls

We established in I $.Promise - Part 1 - Why do jQuery Promises exist? that promises solve some conceptual problems created by using a traditional callback function.

Okay, now what?

In this post, I am going to show how using a traditional callback-style approach manifests into real unsolvable scenarios (or at the least ugly/unmaintainable code).  And, of course, how easy it is with Promises instead.

Fetching a model from a server before updating the UI

You've got this model which knows how to fetch its data from the server and populate itself.  You want to fire that off, get some data back, and when that's all done update your UI with that new information.  Pretty simple stuff, but the flow across all these examples looks like this:
  1. Make a new model.
  2. Fetch new data from the server.
  3. Set local properties on the model from inside the model itself.
  4. Update the UI with new information after all properties are set.
First, with callbacks:
var model = function () {
    var self = this;
    self.setProperties = function () {
        // Do some sets with data
    };
    self.fetch = function (callback) {
        $.get('...', function (args) {
            self.setProperties(args);
            callback();
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

This will work all well and good!  Not too bad!  But what if it takes more than one async operation to fully populate this model?  Let's say it's a composite model and needs to make calls to two different endpoints.  Okay, this will be interesting.  Let's start by following a model of chaining callbacks.

var model = function () {
    var self = this;
    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };
    self.fetch = function (callback) {
        // Fire off the request for the first bit of information
        $.get('...', function (args) {
            self.setPropertiesFromFetch1(args);

            // Get the second bit of information
            $.get('...', function (moreArgs) {
                self.setPropertiesFromFetch2(moreArgs);

                callback();
            });
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

This is made easier by callback() being within our closure.  If we wanted to extract those anonymous functions out into their own functions on the model, we'd need to be passing callback() along the way.

Beyond that, this is slower than it needs to be as you are chaining rather than running these $.get() calls in parallel.  Okay, let's speed it up, using the simplest way I know how (using callbacks) to ensure that we don't execute our passed-in callback function until it's all complete.

var model = function () {
    var self = this;

    self.hasFetched1 = false;
    self.hasFetched2 = false;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets

        self.hasFetched1 = true;
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets

        self.hasFetched2 = true;
    };

    self.callIfDone = function (callback) {
        if (self.hasFetched1 && self.hasFetched2) {
            callback();
        }
    }

    self.fetch = function (callback) {
        // Fire off the request for the first bit of information
        $.get('...', function (args) {
            self.setPropertiesFromFetch1(args);

            self.callIfDone(callback);
        });

        // Get the second bit of information (in parallel now!)
        $.get('...', function (moreArgs) {
            self.setPropertiesFromFetch2(moreArgs);

            self.callIfDone(callback);
        });
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch(refreshTheUI);

Well we've sped things up, but some downfalls of this approach are becoming clear:
  1. Having "hasFetched" properties is ridiculous but necessary under a callback-based approach if you want to block the execution of callback() until the completion of async calls run in parallel.
  2. There's redundancy in the dual execution of callIfDone().
  3. Look at how big our code got!  Imagine if there were 10 child models; you'd have to write some special handler and call it 10 times or do a lot of copy-paste, being sure that callback() gets executed at the right time all the while. 
  4. It's up to the model itself to marshal when to actually execute the callback function given to it.
The complexities of chaining and running asynchronous operations in parallel are quickly overwhelming, not to mention verbose.  I was going to make one more example of how you'd run one operation in series, followed by two in parallel with callbacks only, but I'm not that masochistic.  It's a pain to say the least.

The good news is that this is super simple using a Promise-based approach.  Let's blow this mess up and rewrite it.

A Promise-based Approach

First, let's go back to the simple case of just one call to $.get() and attaching the success handler.

var model = function () {
    var self = this;

    self.setProperties = function (args) {
        // Do some sets
    };

    self.fetch = function () {
        // Fire off the request for the first bit of information
        // Because we attach this done() handler first, we can
        // be assured that it'll get executed before anything
        // added to this function chain later.
        var fetching = $.get('...').done(self.setProperties);
        return fetching;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

Way simpler.

Note that the model no longer cares about the consumer's callback at all!  The model's concern is populating itself and nothing more--no callback function marshaling!  All we need to do is return a promise that's resolved when the model wishes and let whoever receives it attach to it as they will.

In this scenario, our model says that fetch() is complete when $.get() is complete, so we can just return the promise that $.get() gives us straight away. But things aren't always quite so simple.

Running async operations in parallel using $.when()

Let's show how to make this a composite model with multiple $.get() calls just as before, using $.when().

var model = function () {
    var self = this;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };

    self.fetch = function () {
        var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
        var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);
        
        // $.when returns a **new** promise
        var fetchingBoth = $.when(fetching1, fetching2);
        return fetchingBoth;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

$.when() is used when you want a new promise that is only done when both of its children are done (and is rejected if any child gets rejected).  Here are some notes about $.when():
  • $.when() returns a new promise that wraps its inner promises.
  • The promise that it returns is only resolved after all promises given to it are resolved and their done() callbacks have completed.
  • The promise that it returns is rejected if any child is rejected.  Its fail() callbacks are executed only after the inner promise's fail() callbacks are complete.  Other promises are unaffected.
  • You cannot pass an array to $.when().  If you want to do this, you'll need to use $.when.apply($, promiseArr).
  • If you give something that's null, undefined, a plain object or otherwise not a promise, $.when() will treat it as a Promise that's already resolved rather than throwing any kind of error.  So $.when(promise1, null, promise2) equates to $.when(promise1, promise2).  Keep this in mind, as it's burned me when trying to figure out why $.when() was calling its attached done() handlers too early.
Because the promise returned by $.when() doesn't have resolve() called on it until each of its inner promises have had their done() chains complete, we can be assured that when refreshTheUI() gets called all properties have been set via the setProperties() functions.  Awesome.
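The $.when.apply($, promiseArr) form from the notes above isn't $.when-specific; it's plain Function.prototype.apply spreading an array into positional arguments. A library-free sketch (the when stand-in below just counts what arrives):

```javascript
// Stand-in for $.when: all we care about here is how many
// positional arguments actually arrive.
function when() {
    return arguments.length;
}

var promiseArr = ['p1', 'p2', 'p3']; // placeholders for real promises

var direct = when(promiseArr);             // 1: the array arrives as a single argument
var spread = when.apply(null, promiseArr); // 3: apply spreads the array out
```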

Note that as of jQuery 1.8, Deferred.pipe() is now an alias for Deferred.then().  I am leaving the below as-written, but in your code replace .pipe with .then if possible.  Recognizing that Deferred.then() now serves a dual purpose (as shorthand and as pipe) is important to advanced promise implementations.

Going from parallel calls to chained calls is easy with Deferred.pipe()

$.when() is to "parallel" as Deferred.pipe() is to "serial/chained".  Until recently, the documentation for Deferred.pipe() was exceptionally unclear on its usage, though I'm happy to say it's been made a bit clearer as of this writing.  It talked about filtering (something I find to be a rare use case) and really glossed over the very important fact that Deferred.pipe() returns a new promise, just like $.when().

It's easiest for now to just think of Deferred.pipe()* as a chained alternative to $.when(), but its use slightly differs.

* Note how I called it "Deferred.pipe()"?  This is because it's called on the Promise you want to chain off of and doesn't exist on its own like $.when() does.

Let's put it to use.

var model = function () {
    var self = this;

    self.setPropertiesFromFetch1 = function (args) {
        // Do some sets
    };
    self.setPropertiesFromFetch2 = function (args) {
        // Do some more sets
    };

    self.fetch = function () {
        var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
        
        // Deferred.pipe also returns a **new** promise
        var fetchingBoth = fetching1.pipe(function () {
            // This part here is only executed once fetching1 returns successful
            // since the first parameter of pipe is the done callback
            var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);

            // Returning fetching2 here will 'pipe' its results into fetchingBoth
            // Therefore, fetchingBoth is only successful when both fetching1
            // and fetching2 are successful.
            return fetching2;
        });
        return fetchingBoth;
    };
}

function refreshTheUI() {
    // Refresh UI with new data
}

var myModel = new model();
myModel.fetch().done(refreshTheUI);

Here are some notes to keep in mind when using Deferred.pipe():
  • It's called on the promise you want to chain from.
  • It returns a new promise.
  • Whether the promise that pipe() returns is rejected or resolved depends on that of the inner returned promise, if one is given.
  • Its method signature matches that of Deferred.then(), and although the documentation makes no mention of it as of this writing, as of jQuery 1.8, Deferred.then() is an alias for Deferred.pipe().  They are exactly the same, even though their stated purposes are quite different.

Getting even more complex is simple with Promises

Creating something with exceptionally complex async logic is super simple using Promises.  Remember that "serial then parallel" challenge I mentioned using callbacks?  Well it's as simple as this:

    self.fetch = function () {
        var fetchMeFirst = $.get('...');
        
        // fetchingEverything is a new promise that's successful
        // if the three $.get() calls complete successfully.
        var fetchingEverything = fetchMeFirst.pipe(function () {
            var fetching1 = $.get('...').done(self.setPropertiesFromFetch1);
            var fetching2 = $.get('...').done(self.setPropertiesFromFetch2);

            return $.when(fetching1, fetching2);
        });
        return fetchingEverything;
    };

Promises are freaking awesome.

Wednesday, September 12, 2012

I $.Promise - Part 1 - Why do jQuery Promises exist?

When I first learned about Promises, I couldn't rationalize their existence.  I simply didn't know what they were for.  Sure, I had been told how great they were, but I made do with callbacks and events and never saw a need for anything else.  After having been exposed to this brand new world, I don't know how I managed without them!

In this first post, as an introduction to jQuery Promises, I hope to answer the question as to why Promises exist and solve some very very basic problems traditionally solved by callback functions.

Building up to promises from traditional callbacks

So you've got this thing called a Promise.  Now what the heck is it?  Well, let's start by thinking about the success callback function of $.get().

$.get('...', function () {
    console.log("Data gotten!");
});

Okay, seems well and good.  But now what if you want to execute more than one success function? Maybe the success function is a function that calls two more success functions!  That'll be cool!

var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

var executeSuccessFunctions = function () {
    firstSuccessFunction();
    secondSuccessFunction();
};

$.get('...', executeSuccessFunctions);

One major problem with this: the authority on what gets executed upon $.get() success is the executeSuccessFunctions function and not whoever actually called $.get()!

In order to take control away from executeSuccessFunctions (because it might not know what we want to do, or might exist in a different scope altogether), we need to give an agnostic execute function to $.get() and let the caller construct what they want to execute.

var successFunctions = [];
var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

successFunctions.push(firstSuccessFunction);
successFunctions.push(secondSuccessFunction);

var executeSuccessFunctions = function () {
    for (var i = 0; i < successFunctions.length; i++) {
        successFunctions[i](); // Execute it
    }
}

$.get('...', executeSuccessFunctions);

More unsolved problems

  • At first glance it might appear that you can push a function onto successFunctions after the $.get() line and have it execute on success.  What this would really do is create a race condition: if the $.get() returns after you do this, no problem; however, if it returns before you push a new function onto the successFunctions array, it'll never get executed.  Net result: we still need to define all of our success callbacks before we actually make our AJAX call.
  • We've written unnecessary plumbing code that'd need to get pasted all over the place for executing each function in an array or whatever other logic might exist.
  • We cannot easily accommodate all possible code paths.  What if the AJAX call itself fails?  Do I then need to construct a new errorFunctions array for storing those?  What if there's overlap and I want to execute firstSuccessFunction under both scenarios?  The pure plumbing required to hook all this up is ginormous.

Well, Promises solve all of these problems and more via an entirely different convention.

Rather than taking an array of success callbacks to execute when it's done doing its thing, $.ajax() delegates that responsibility to a third-party object and returns this object immediately to the caller.

Promise-based implementation

Here's our example again using Promises.

// We don't need to give a callback to $.get() as it
// isn't going to execute our callbacks; on AJAX
// success it's going to tell the Promise it returned
// to execute **its** callbacks instead. This call to
// resolve() by $.ajax() happens when the server
// responds.

var getting = $.get('...');

var firstSuccessFunction = function () { 
    console.log('Update thing A'); 
};
var secondSuccessFunction = function () { 
    console.log('Update thing B'); 
};

// analogous to successFunctions.push from before
getting.done(firstSuccessFunction); 
getting.done(secondSuccessFunction);


With that tiny bit of code, we've solved all of our stated problems with standard callbacks:
  • The caller of $.get() is the authority of what gets executed upon the success or failure of $.get().
  • We can define and attach a callback function at any time--before or after $.ajax() has returned and is resolved, even in an entirely new scope simply by handing off the Promise object.  Whether the Promise is resolved yet or not is irrelevant: if it is, the function passed to done() will execute immediately.  If it isn't, it'll execute later when the Promise is resolved.
  • There's no plumbing code about how to execute things or what callback chain to execute.  That's handled by the Promise itself and the authority over the Promise (in our case $.get) respectively.
  • To create an entirely different chain of callbacks on failure, we just attach new handlers via .fail() instead.  If we want something to execute no matter what, that's when we use .always().

The code shown here is deliberately simple, but I hope when you look at it you do not merely see callback equivalents with a different syntax.  Promises are a whole new paradigm that will significantly change the way you structure your applications.

In short, Promises are awesome.