async.js and promises

A friend of mine linked the blog post Callbacks are imperative, promises are functional: Node's biggest missed opportunity during our own discussion about async.js vs promises.

I agree that async.js sits at a lower abstraction level than promises. But if I were the one writing node.js low-level APIs, I'd pick callbacks. They aren't pretty, but continuation-passing style is still functional.

Why would I prefer callbacks for low-level APIs? Because they are so damn simple. Everything is explicit and visible. With promises, on the other hand, lots of details are hidden under the abstraction; the Promises/A+ specification tries to spell out how they should work, though. Yet for user-land application code I might prefer promises, as they are much easier to compose. And functions like node.liftAll from when/node make it easy to wrap modules that expose a callback interface.

Promise type

The type signature of fs.readFile is

readFile :: String -> (Either Error Buffer -> IO ()) -> IO ()

The promised version could be

readFile :: String -> Promise Buffer

Luckily, it's easy to define such a type in Haskell:

newtype Promise a = Promise ((Either String a -> IO ()) -> IO ())

or using the Continuation Monad

type Promise a = ContT () IO (Either String a)

Returning to the original post, it's unfair to make the comparison with map & list. (By the way, list is called Q.all in Q and when.all in when.js.)

Fairer is to compare when.all with async.parallel. Slightly modifying the fs.stat example:

var fs_stat = node.lift(fs.stat); // node = require("when/node");

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];
var tmp =;
var statsPromises = when.all(tmp);

statsPromises.then(function (stats) {
  // use the stats
});

var fs_stat = _.curry(fs.stat, 2);

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];
var tmp =;
var statsCallback = _.partial(async.parallel, tmp);

statsCallback(function (err, stats) {
  // use the stats
});

// helper to use with map

function oneparam(f) {
  return function (x) {
    return f(x);
  };
}
As you can see, it's not so different. The source-code layout is quite similar, and the amount of boilerplate is about the same.
However, what happens, and when is different.

Memoization of the results

James works through the problem of avoiding hitting the file system twice. The problematic callback code and the promise-based solution reduce to two snippets:

var callback = _.partial(fs.stat, "file1.txt"); // construct computation

callback(callbackify(console.log)); // run it and use the result

callback(callbackify(console.log)); // run it and use the result, again


var promise = node.lift(fs.stat)("file.txt"); // construct computation, and run it

promise.then(console.log); // use the result

promise.then(console.log); // use the result, again


function callbackify(fn) {
  return function (err, res) {
    if (!err) {
      fn(res);
    }
  };
}

They look very similar, but their behaviour is different, as you can see from the comments.
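The behavioural difference is easy to demonstrate with a counter. Here is a toy sketch using native ES6 promises and a made-up fakeStat in place of the real file system:

```javascript
// A fake node-style stat that counts how many times the effect actually runs
var statCount = 0;
function fakeStat(path, callback) {
  statCount += 1;
  callback(null, { path: path, size: 42 });
}

// Callback style: partial application only *constructs* the computation,
// so every invocation re-runs the side effect.
var callback = function (cb) { fakeStat("file1.txt", cb); };
callback(function (err, stats) { /* use the result */ });
callback(function (err, stats) { /* use the result, again */ });
console.log(statCount); // 2 -- the effect ran twice

// Promise style: constructing the promise runs the effect once,
// and the settled value is memoized for every .then.
statCount = 0;
var promise = new Promise(function (resolve, reject) {
  fakeStat("file1.txt", function (err, stats) {
    if (err) { reject(err); } else { resolve(stats); }
  });
});
promise.then(function (stats) { /* use the result */ });
promise.then(function (stats) { /* use the result, again */ });
console.log(statCount); // 1 -- constructed (and run) once, result reused
```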

Should we work in IO or Promise monad?

If we rewrite the problematic callback code in Haskell, we will see the solution!

import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Cont

type Promise a = ContT () IO (Either String a)

-- We use this just to be clear

data Stats = Stats String
  deriving (Show)

stat :: String -> Promise Stats
stat path = ContT $ statImpl path

statImpl :: String -> (Either String Stats -> IO ()) -> IO ()
statImpl path callback = do
  -- cause a side-effect

  putStrLn $ "stat " ++ show path
  -- execute the callback

  callback $ Right (Stats path)

promise :: Promise Stats
promise = stat "file.txt"

main :: IO ()
main = do
  runContT promise print -- kind of promise.done(console.log)

  runContT promise print -- but we are running the computation twice

The output is

stat "file.txt"
Right (Stats "file.txt")
stat "file.txt"
Right (Stats "file.txt")

We run the computation twice. We can rewrite the code to have only one runContT:

promisePrint :: (Show a) => a -> Promise ()
promisePrint x = liftIO (Right `fmap` print x)

work :: Promise Stats -> Promise ()
work pstats = do
  s <- pstats
  promisePrint s
  promisePrint s
  return $ Right ()

main :: IO ()
main = runContT (work promise) print 

The output is

stat "file.txt"
Right (Stats "file.txt")
Right (Stats "file.txt")
Right ()

Similarly, the JavaScript example in the original post

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];, fs.stat, function (error, results) {
  // use the results
});

fs.stat(paths[0], function (error, stat) {
  // use stat.size
});


could be rewritten to:

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];

function work(error, results) {
  // use the results

  // use only the first result
}

// start the execution, fs.stat, work);


I hope the examples gave you some insight into promises and raw callbacks. As you can see, using bare callbacks isn't that hard, if you know what you are doing.

And as mentioned in the introduction, I'd prefer callbacks for a library's low-level API. They add no overhead, and you keep full control over the execution. Users of your library can also pick their promise library freely (without conversion overhead).

For application code, like the stat examples, promises are better. You could avoid callback hell without promises, just by naming your functions and avoiding nested callbacks. But promises give you a nice(-looking) abstraction which is also easier to use, not forgetting nuances like then callbacks always being dispatched asynchronously.

TDD with QuickCheck

I read Nat Pryce's Exploring Test-Driven Development with QuickCheck, about his journey implementing Tic-Tac-Toe using Test-Driven Development, as in Keith Braithwaite's TDD exercise.

The first thought that popped into my mind:

TDD makes you think hard about your code, property-based TDD makes you think even harder.

I'll try to do this exercise myself, though I take an easier way: with Haskell.

Keith's first rule is: Write exactly one new test, the smallest test you can that seems to point in the direction of a solution. And this is hard. Nat took quite a big step, already thinking about a winner at the beginning. We won't get even close in this quite lengthy post.

Going red

Returning to Tic-Tac-Toe: what data types and operations can we start from? We write our types first.

data Player = X | O
  deriving (Eq, Show, Bounded, Enum)
data Fin3 = I1 | I2 | I3
  deriving (Eq, Show, Bounded, Enum)
newtype Position = Position { unPosition :: (Fin3, Fin3) }
  deriving (Eq, Show)
data Move = Move { movePosition :: Position, movePlayer :: Player }
  deriving (Eq, Show)
newtype Game = Game { unGame :: [Move] }
  deriving (Eq, Show)

That feels like a lot of types already, but I can't state any interesting properties about moves without tying them to a game (which gives them an order).

Then the simplest operation (actually I cheat here: I already have a sub-goal in mind, as I eventually want to define an isValidMove :: Game -> Move -> Bool predicate).

isFreePosition :: Game -> Position -> Bool
isFreePosition g p = undefined

Then what will be our first property? Maybe: forall g :: Game, exists p :: Position, isFreePosition g p. This one isn't true for all games; a counter-example is a game with 9 moves. We refine our property to: forall g :: Game, length (unGame g) < 9, exists p :: Position, isFreePosition g p. This property is problematic as well: first of all, it's existential. We can turn it around into:
forall g :: Game, length (unGame g) == 9, forall p :: Position, isFreePosition g p = False.
This property we can QuickCheck:

import TicTacToe

import Control.Monad
import Test.QuickCheck

instance Arbitrary Player where
  arbitrary = arbitraryBoundedEnum
  shrink X = []
  shrink O = [X]

instance Arbitrary Fin3 where
  arbitrary = arbitraryBoundedEnum
  shrink I1 = []
  shrink I2 = [I1]
  shrink I3 = [I1, I2]

instance Arbitrary Position where
  arbitrary = Position `liftM` arbitrary
  shrink = map Position . shrink . unPosition

instance Arbitrary Move where
  arbitrary = liftM2 Move arbitrary arbitrary
  shrink (Move pos player) = [Move pos' player | pos' <- shrink pos] ++
                             [Move pos player' | player' <- shrink player]

instance Arbitrary Game where
  arbitrary = Game `liftM` arbitrary
  shrink (Game ms) = map Game $ shrink ms

-- To make generation rate better

newtype Game9 = Game9 { unGame9 :: Game }
  deriving (Eq, Show)

instance Arbitrary Game9 where
  arbitrary = (Game9 . Game) `liftM` replicateM 9 arbitrary

-- Generator property

game9_prop :: Game9 -> Bool
game9_prop = (9==) . length . unGame . unGame9

> quickCheck game9_prop 
+++ OK, passed 100 tests.

-- forall g :: Game, length (unGame g) >= 9, forall p :: Position, isFreePosition g p = False

game9_no_free_positions_prop :: Game9 -> Position -> Bool
game9_no_free_positions_prop g' p = not $ isFreePosition g p
  where g = unGame9 g'

> quickCheck game9_no_free_positions_prop
*** Failed! Exception: 'Prelude.undefined' (after 1 test and 21 shrinks):

As expected, QuickCheck found the undefined after one test (as isFreePosition is always undefined for now).
Now we try to turn red into green: let's implement isFreePosition properly.

Turning green

isFreePosition :: Game -> Position -> Bool
isFreePosition g p = False

> quickCheck game9_no_free_positions_prop
+++ OK, passed 100 tests.

You can argue this implementation isn't proper, but all tests are green!

New tests, red again

So we need more tests (properties). Every position is free in a just-started game: forall p :: Position, isFreePosition (Game []) p.

empty_game_all_positions_free_prop :: Position -> Bool
empty_game_all_positions_free_prop p = isFreePosition (Game []) p

> quickCheck empty_game_all_positions_free_prop 
*** Failed! Falsifiable (after 1 test and 2 shrinks):  
Position {unPosition = (I1,I1)}

Turning green II

Again red. Let's fix isFreePosition:

isFreePosition :: Game -> Position -> Bool
isFreePosition g p = p `notElem` map movePosition (unGame g)

> quickCheck empty_game_all_positions_free_prop 
+++ OK, passed 100 tests.

> quickCheck game9_no_free_positions_prop 
*** Failed! Falsifiable (after 5 tests and 1 shrink):  

What's wrong? We can implement shrink for Game9, which will hopefully make the counterexample more descriptive:

instance Arbitrary Game9 where
  arbitrary = (Game9 . Game) `liftM` replicateM 9 arbitrary
  shrink (Game9 (Game ms)) = do ms' <- mapM f ms
                                guard $ ms' /= ms
                                return $ Game9 $ Game ms'
                             where f x = x : shrink x

> quickCheck game9_no_free_positions_prop 
*** Failed! Falsifiable (after 7 tests and 16 shrinks):    
Game9 {unGame9 = Game {unGame = [Move {movePosition = Position {unPosition = (I1,I1)}, movePlayer = X},Move {movePosition = Position {unPosition = (I1,I1)}, movePlayer = X}...
Position {unPosition = (I2,I1)}

It's easy to see that all the moves in the game are the same! But such a game doesn't make sense, and that's our fault.

Turning green, try III.

Our game9_no_free_positions_prop property is bogus. We could restate it as something more precise: forall g :: Game, length (unGame g) >= 9, isValidGame g, forall p :: Position, isFreePosition g p = False, where isValidGame is something like:

isValidGame :: Game -> Bool
isValidGame (Game []) = True
isValidGame (Game (m:ms)) = isValidMove g m && isValidGame g
  where g = Game ms

isValidMove :: Game -> Move -> Bool
isValidMove g m = isFreePosition g (movePosition m)

But we are red, so we don't touch properties.

isValidGame is a Game invariant we always want to hold. If we hide Game's implementation details and expose only invariant-preserving operations, forall g :: Game, isValidGame g will be true (and we can state this property!). Unfortunately we have to change our tests a little, as we give things names, like moveCount = length . unGame. That we could have done right from the beginning.

Next we change the TicTacToe module. The properties don't change much. You can find the edited files at TicTacToe.hs and TicTacToeTests.hs, and the coverage markup report.

I use assert in the addMove function. Why?
Because I cannot state the pre-condition in the types. The situation is similar to division by zero.

% ghc -fhpc TicTacToeTests.hs --make                  
[1 of 2] Compiling TicTacToe        ( TicTacToe.hs, TicTacToe.o )
[2 of 2] Compiling Main             ( TicTacToeTests.hs, TicTacToeTests.o )
Linking TicTacToeTests ...
% ./TicTacToeTests                                     
+++ OK, passed 300 tests.
+++ OK, passed 300 tests.
+++ OK, passed 300 tests.
+++ OK, passed 300 tests.
% hpc report TicTacToeTests --exclude=Main --exclude=QC 
 95% expressions used (40/42)
100% boolean coverage (0/0)
     100% guards (0/0)
     100% 'if' conditions (0/0)
     100% qualifiers (0/0)
100% alternatives used (2/2)
100% local declarations used (1/1)
 30% top-level declarations used (13/42)

Looks pretty good already, except that the unPosition and movePlayer functions are unused. I thought too much up front; we could have gotten here without the Player type at all. However, a natural next step would be to write code that eventually uses movePlayer, like defining an isNextPlayer :: Game -> Player -> Bool predicate and going through steps similar to those for isFreePosition. I leave that as an exercise for the reader.


At some point I realized that the properties aren't specific to the game of Tic-Tac-Toe. The only Tic-Tac-Toe-specific part is the magic constant 9: it is 64 for Reversi, and 361 for the 19×19 board of Go or Gomoku. So we came up with some general board-game properties!

Static vs Dynamic

The Internet is full of discussions about statically vs. dynamically typed languages (some of them: 1 2 3), and I want to add my own drop to the ocean.

When we write code, we want it to be correct and bug-free. If it's also written quickly, that's a bonus. Both goals are hard to achieve simultaneously.

To make a program correct we use many different techniques. If we are working on some algorithm, we might sketch it on paper (alone or with a co-worker) and go through some simple examples to see how our idea works. Then those examples turn into unit tests, and we might have some more generic tests derived from specifications. After the code is written, we give it to a teammate for review. We sometimes even write documentation to explain the ideas behind the lines of code. We also add some logging here and there, so that if we encounter a problem in the future, it's easier to find and understand.

With statically typed languages you start by defining your data types and writing function type signatures (I'm thinking about how I write Haskell), and then let the types guide you through development.
With dynamically typed languages you also start by defining your data types and thinking about function type signatures, but maybe you only write them down in the comments...

IMHO, in bigger projects a statically typed language helps us reduce silly errors.
We already use static tools in dynamic languages, e.g. jshint for JavaScript, which catches silly variable-name misspellings and the like. Why not use a static type-checker to catch even more errors? The heart of the problem is that the type systems of languages you can actually use in production are often too restrictive.

For example, we can do everything in the simplest functional language, the untyped lambda calculus: it's Turing complete and dynamic. But if we add the simplest type system to it, turning it into the simply typed lambda calculus (STLC), we lose Turing completeness; we can't even type-check basic iteration. We need to either add a special fix operator, or recursive types with folds (read the introduction of this thesis), also known as catamorphisms (that's the fancy name).
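In an untyped language the fix operator needs no special support. For instance, a call-by-value fixed-point (Z) combinator can be written directly in JavaScript, and the self-application x(x) inside it is exactly the term STLC cannot type:

```javascript
// Z combinator: a call-by-value fixed-point operator.
// The self-application x(x) below has no (simple) type in STLC.
function fix(f) {
  var g = function (x) {
    return f(function (v) { return x(x)(v); });
  };
  return g(g);
}

// Iteration (factorial) recovered via fix, with no named recursion
var fact = fix(function (self) {
  return function (n) {
    return n === 0 ? 1 : n * self(n - 1);
  };
});

console.log(fact(5)); // 120
```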

Still, STLC without polymorphic types is very restrictive. Even the simple identity function

λx. x

cannot be type checked as such; we must give some concrete ("hard") type to the argument (and the result):

λ(x : Int). x  :  Int → Int

This might feel really stupid, but think about C (without macros).

We can extend our type system further by adding polymorphic types: just a bit, ending up with the Hindley-Milner type system, or going all the way to a corner of the lambda cube, getting System F. Then we can write our useful identity function:

id : ∀a. a → a
id = Λa. λ(x : a). x

Hindley-Milner is at the heart of Haskell and ML, but we can extend the type system further. For example, RankNTypes makes the Haskell type system more like System F. There are also many other type-system extensions in GHC. There is even some dependent-type stuff for Haskell, but there are better alternatives if you want to do that, like Agda or Idris.

Let's go through an example. We have two extremes. In a dynamic language, say JavaScript:

function first(arr) {
  return arr[0];
}

And in dependently typed, static Coq (I prefer a tactic-based definition, because I seldom get it right with refine):

Definition first (A : Type) (l : list A) (H : l <> nil) : A.
  destruct l as [|x rest].
  (* l == nil *)
  exfalso. auto.
  (* l == x :: rest *)
  exact x.
Defined.

Haskell is somewhere in between:

first :: [a] -> a
first (x:_) = x

Coq's version will not permit you to apply first to a list without a proof that the list is non-empty. Haskell will fail at run time (with Non-exhaustive patterns in function first). We can make the error message better:

first :: [a] -> a
first (x:_) = x
first _     = error "`first` called with an empty list"

By the way, Coq's extraction mechanism is clever enough to produce similar Haskell output; with

Extraction Language Haskell.
Extraction Inline False_rect.
Extract Inductive list => "[]" [ "[]"  "(:)" ].
Extraction first.

we get

first :: ([] a1) -> a1
first l =
  case l of {
   [] -> Prelude.error "absurd case";
   (:) x rest -> x}

What will happen in JavaScript? If we pass an empty array, we get undefined back. We also get undefined if we evaluate first(1). That behaviour might be OK, or maybe we want to raise an error there. After adding explicit checks, the code isn't elegant anymore:

function assert(exp, msg) {
  if (!exp) {
    throw new Error(msg);
  }
}

function first(arr) {
  assert(Array.isArray(arr), "arr must be an array");
  assert(arr.length !== 0, "arr must be a non-empty array");
  return arr[0];
}

Dynamic languages differ from one another too; for example, the Python version

def first(l):
  return l[0]

will raise an error if l is an empty list, or if l doesn't behave like a list (an effect of duck typing).

In essence, the problem is that, if it walks like a duck and quacks like a duck, it could be a dragon doing a duck impersonation. One may not always want to let dragons into a pond, even if they can impersonate a duck.

Proponents of duck typing, such as Guido van Rossum, argue that the issue is handled by testing, and by the knowledge of the codebase required to maintain it.

And the latter, the necessary knowledge, is very hard to gather and maintain, especially in large projects. Types can help you. You still want some asserts too (even Python has an assert statement!) to make the reasons for failing tests clearer.

In conclusion, with a very sophisticated type system you can express many contracts (pre- and post-conditions) using types. But convincing the type-checker may be tedious and time-consuming, so you may go for a run-time check instead. I would like a softly typed language (though I'd prefer the term quasistatic, like quasistatic processes in thermodynamics).

The key idea underlying soft typing is that a static type checker need not reject programs that contain potential type errors. Instead, the type checker can insert explicit run-time checks around “suspect” arguments of primitive operations, converting dynamically typed programs into statically type-correct form.
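Hand-rolled, such inserted checks might look like the following sketch. The checked helper here is hypothetical, purely for illustration, and is not typify's actual API:

```javascript
// Hypothetical helper: wrap a "suspect" function with a run-time check
// on its argument, much like a soft type checker might insert automatically.
function checked(argCheck, fn) {
  return function (x) {
    if (!argCheck(x)) {
      throw new TypeError("run-time check failed for argument: " + x);
    }
    return fn(x);
  };
}

// first restricted to non-empty arrays, pre-condition checked at run time
var first = checked(
  function (arr) { return Array.isArray(arr) && arr.length > 0; },
  function (arr) { return arr[0]; }
);

console.log(first([1, 2, 3])); // 1
// first([]) or first(1) would throw a TypeError,
// instead of silently returning undefined
```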

I have also written a little JavaScript library to help with the most common run-time checks in JavaScript: typify. It's not done yet, but it's already usable.