From what I’m hearing and reading, more and more developers are working like this, but it’s still not a particularly well-understood way of doing things. The usual way it’s done is that your editor and tools live on your host machine, you interact with the project code on the guest machine using a shared folder, and you run tests and other development processes on the guest machine via ssh.
Sounds pretty straightforward, but there are some subtle problems. One is filesystem events, which don’t work as they normally would.
In most modern operating systems, there’s some kind of mechanism for a process to receive notifications when files change. In Linux it’s inotify, for example. These mechanisms allow processes to watch a set of files for changes without polling the filesystem (i.e. repeatedly checking the modification times of the watched files). Because they work natively, and without polling, they’re very fast.
In the standard Vagrant development setup, though, you’re making changes to the files on the host machine, but running the watching processes in the guest machine. And unfortunately that means that mechanisms like inotify
don’t work.
If you use Guard, for example, to run tests in a Ruby app when files change, it uses Listen to watch the filesystem. Listen ships with adaptors for many different notification systems, including a fallback polling adaptor. Until quite recently, the only way to use Guard inside a Vagrant VM was to use the polling adaptor – which is very slow, and very resource-intensive. Polling the files in a decent-sized Rails app at an interval of 1 second will most likely pin the CPU of the guest machine; also, in my experience it just wasn’t reliable (changes often wouldn’t seem to be noticed, or would be noticed late).
If you’re using something like guard-rspec to do continuous TDD, for instance, then having to repeatedly nudge the filesystem to pick up changes, and wait several seconds for them to be picked up, becomes, well, painful. There’s a way round this, though: Listen and Guard provide a way to listen to filesystem events in one machine and forward the events to another machine over the network. I won’t describe this in detail, because it’s been done elsewhere.
There are a couple of niggling inconveniences with this solution, though. Firstly, it’s just cumbersome: You need to start a listen
process on your host machine, then start a guard
process on your guest machine, and then remember to shut them both down when you’re done. In a traditional setup you just run guard
and away you go.
Secondly, the guard
process needs to know the full path to the watched directory on the host machine, which means it’s hard to make the setup portable (it’s a near certainty that the path will be different for every developer on the project).
Enter vagrant-triggers, which lets you do arbitrary stuff around the lifecycle events of a Vagrant VM. We can use this to start and stop listen
on the host machine for us, which solves issue one. And we can set up some environment variables inside the guest machine to solve problem two. Let’s do that first.
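The Vagrantfile code for this looks something like the following – a sketch, not the original: the port number 4000 and the profile.d path are my stand-ins.

```ruby
LISTEN_PORT = 4000

Vagrant.configure("2") do |config|
  # Forward the notification port between host and guest
  config.vm.network "forwarded_port", guest: LISTEN_PORT, host: LISTEN_PORT

  # Push the host-side project path and the port into the guest's environment
  config.vm.provision "shell", inline: <<-SH
    echo 'export HOST_ROOT="#{File.dirname(__FILE__)}"' >  /etc/profile.d/listen.sh
    echo 'export LISTEN_PORT=#{LISTEN_PORT}'            >> /etc/profile.d/listen.sh
  SH
end
```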
That creates the environment variables HOST_ROOT
and LISTEN_PORT
in the guest machine, and forwards LISTEN_PORT
to the guest. Next we create a couple of simple functions in the Vagrantfile:
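Sketched out, the two functions might look like this. I'm assuming Listen 2.x's TCP forwarding mode (the forward_to option) here, and the pkill pattern is a stand-in – match it to whatever command you actually run.

```ruby
def start_listen_script
  dir = File.dirname(__FILE__)
  # Run listen on the host, broadcasting events for the project directory
  # to the notification port; background it so vagrant can continue
  "ruby -e 'require %q(listen); " \
    "Listen.to(%q(#{dir}), forward_to: %q(127.0.0.1:4000)) {}.start; sleep' " \
    "> /dev/null 2>&1 &"
end

def stop_listen_script
  # Kill any process whose arguments match the command above, so we don't
  # accumulate orphan listen processes across vagrant runs
  "pkill -f 'forward_to' || true"
end
```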
start_listen_script
starts listen
and forwards change notifications to LISTEN_PORT
; because we’ve forwarded that port to the guest machine, the guest machine will receive the notifications.
stop_listen_script
checks for a running process in the host machine which matches the listen
executable and arguments, and if it finds one, kills it. We need to do this so that Vagrant can run its lifecycle operations correctly, and so we don’t end up with lots of orphan listen
processes.
Now we’re almost ready to create some triggers, but we need to make some additional gems available to Vagrant. Run the following in your host machine:
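That is, something along these lines (one gem per invocation, to be safe):

```shell
vagrant plugin install celluloid-io
vagrant plugin install thor
```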
celluloid-io and thor are necessary for listen
to work correctly when started as part of Vagrant’s lifecycle (interesting to note here that vagrant plugin install
is just gem install
in disguise – it can make arbitrary gems available to Vagrant).
Next we need to make sure listen
is available on our host machine:
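A plain gem install does it:

```shell
gem install listen
```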
And finally install vagrant-triggers
:
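```shell
vagrant plugin install vagrant-triggers
```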
Now we can create the following triggers in our Vagrantfile:
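Using vagrant-triggers' DSL, the triggers come out roughly like this – the exact lists of lifecycle events are assumptions; tune them to taste:

```ruby
# run executes the given command on the *host* machine
config.trigger.after [:up, :resume, :reload] do
  run start_listen_script
end

config.trigger.before [:halt, :destroy, :reload] do
  run stop_listen_script
end
```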
That will start listen
on the host when we bring our VM up, and stop listen
when we take it down or cycle it. All we need now is a way to properly run guard
inside the guest machine, picking up the correct watch directory and ports. We can do that in our project’s Rakefile:
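A sketch of the task – the task name is my invention; Guard's -o/--listen-on flag tells it to take file events from the network rather than watching the (shared) filesystem directly:

```ruby
namespace :guard do
  desc "Run Guard inside the VM, fed by the listen process on the host"
  task :vagrant do
    # HOST_ROOT and LISTEN_PORT were pushed into the guest's environment
    # by the Vagrantfile provisioning
    exec "bundle exec guard -w #{ENV['HOST_ROOT']} -o '10.0.2.2:#{ENV['LISTEN_PORT']}'"
  end
end
```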
That works thanks to the environment variables we pushed into the guest machine earlier. Note that I hardcode the IPs to 127.0.0.1
on the host and 10.0.2.2
on the guest, because they’re the defaults – you can change them, or make them configurable if you want. Now we can run Guard in the guest machine like so:
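With the Rake task named guard:vagrant – a stand-in name – that's just:

```shell
bundle exec rake guard:vagrant
```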
Much better.
We don’t only develop Rails apps though. We also have a burgeoning front-end estate, and for that we use npm, Browserify and Karma, among other things. These all present their own issues. Firstly, as far as file-watching goes, we’re stuck with polling on the front end. To my knowledge, none of the JS filewatching solutions provide anything out-of-the-box like Listen’s network forwarding, so if you’re using Watchify to run incremental Browserify builds,
make sure to pass the poll
option (see the Watchify docs). Continuous testing with Karma defaults to polling automatically, as does live reloading with Browsersync. There is one big headache remaining though.
When we started using npm and Browserify to build our front-end projects, we were, ah, dismayed by how long it took to run a complete npm install
. The turnaround could be minutes – sometimes double figures of minutes – which made any change in dependencies agonising. To boot, it quite often hung or failed entirely. We entertained a few potential solutions (e.g. running a local caching server, or adjusting the npm cache settings) before we noticed something odd.
A new front-end developer we’d taken on wasn’t using Vagrant, and had resisted switching to it. It turned out that his resistance was owing to how long it took for npm install
runs to complete. Because on his host machine, they were fast. Where our runs would take 7 minutes, his took 40 seconds. So it was immediately apparent that the problem wasn’t just npm – it was Vagrant, too (or to be more accurate, VirtualBox).
We did a bit of research into what the problem could be, and it occurred to me that some time ago when I’d been trying to get Guard to work, I’d read about using nfs rather than the default VirtualBox filesystem to share folders between host and guest. Using nfs had caused more problems than it seemed to solve, so I gave up, but I recalled during that research I’d read some Vagrant users suggesting that the VirtualBox filesystem could be slow for certain operations. So we tried nfs again. Bam: 40-second npm runs.
It turns out that VirtualBox Shared Folders (vboxsf
), the default filesystem when using VirtualBox with Vagrant, is extremely slow for almost all operations (see e.g. http://mitchellh.com/comparing-filesystem-performance-in-virtual-machines). With a tool like npm, which in the course of an install reads and writes thousands of files, this is disastrous. We’d never noticed the issue in our Rails apps, using Bundler, but npm’s architecture (which installs a full copy of every subdependency in the tree), combined with the Javascript fashion for lots of very small modules, was enough to bring the deficiencies of vboxsf
to a very noticeable light.
Just switching to nfs, though, wasn’t enough to solve all our problems. When I’d used it before, I’d had issues with unwanted caching (files not appearing to change in the guest machine when changed on the host). So we had to do a bit more research to figure out how to tweak the nfs setup to suit. This is what we ended up with:
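In Vagrantfile terms, that's roughly the following – a sketch: the share paths are Vagrant's defaults, and the options are explained below:

```ruby
config.vm.synced_folder ".", "/vagrant",
  type: "nfs",
  mount_options: ["nolock", "vers=3", "udp", "noatime", "actimeo=1"]
```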
Note the mount_options
parameter: this passes the given array as options to the mount command used to mount the share in the guest machine. Here’s what they do:
nolock prevents the guest and host filesystems from sharing file locking information. It’s a workaround to allow the use of older nfs servers – we found it necessary to enable sharing between our Ubuntu 12 guests and OS X hosts.

vers=3 forces the version of nfs server to use. Again we found this necessary, but you may not.

udp forces the use of udp rather than tcp, which we found gave a performance boost.

noatime prevents the modification of the last-updated timestamp on files, which again gives a performance boost.

actimeo=1 sets the caching timeout to one second, which fixed the issues we were having with unwanted caching.

When the Vagrant machine is started or cycled, you may be asked for your password, because Vagrant has to modify the /etc/exports file on your host system to enable the share. Otherwise, this setup works well for us – we get fast npm runs and file watches that don’t completely pin the guest CPU.
This way of doing dev work is still fairly immature, and we’ve had to find our own solutions to the problems it poses. There are still things that don’t work – something like Robe for example, which can run a Ruby process and use it to provide code completion and navigation, has so far been too difficult to get working across the host/guest boundary.
That’s a nice-to-have though; the benefits of working this way make it more than worthwhile to work on solutions to the problems.
I’ve had to dip into Windows before (having worked a job where the server-side code was .Net), but I’ve never had to use it as a primary dev environment. Long story short: it is extremely painful to use Windows for modern web development, or any kind of development which is not explicitly for Windows.
Here is a statement of the main problems, and the measures which can get you at least close to a solution.
An odd one to start with perhaps, but it has a significant impact on the rest of the setup. There are two ways you can get a usable Git setup working on Windows: one is to use Cygwin, of which more below, and one is Git for Windows, otherwise known as MSYS-Git. You can download a lightweight version of MSYS-Git as Git Bash – which, you shouldn’t be surprised to hear, comes with an implementation of Bash – but I’m going to advise getting the full installer. It should become clear why.
MSYS-Git is quite slow; apparently the version that you can use with Cygwin is faster, but Cygwin was too much of an overhead for my purposes.
Where to begin. The Windows command line is a disaster, unless all you need to do is change directories and open files. No grep. No vi. No sed. The knock-on effects, on editors and other tools which rely on these utilities, are not fun. I’d been recommended clink, which offers Bash-like completion, but it’s still … the Windows command line.
The options, if you want Bash or something like it, are Cygwin, effectively a complete replacement for the Windows command system, GoW (Gnu on Windows), a lighter-weight port of a set of Gnu utilities, or MSYS, a package similar to GoW, intended for compiling Gnu utilities on Windows. An outside choice is Eshell if you’re an Emacs user; I tried it, but found it didn’t fit well with my workflow.
Cygwin is a very comprehensive solution: it presumes you’re basically going to exist within it, which really didn’t work for me. GoW was nice, but didn’t play well with the Git-integrated Bash (see above), and neither did a separate install of MSYS. For these reasons, I suggest installing the complete MSYS-Git environment I mentioned above: you get Git, Bash and a decent subset of the Gnu utilities, all well-integrated.
That this is not a solved problem should give you some idea of The State of Things. There are two contenders, neither of which really represents a resounding victory: ConEmu and Console2. (Incidentally, are you noticing how much of this stuff is on Google Code? And SourceForge for heaven’s sake? The mind boggles). ConEmu is highly configurable. I do not need my terminal emulator to be highly configurable. Console2 is marginally easier to use, but pug ugly. I went for Console2.
The only real game in town here is Chocolatey. Like most community projects in the Windows world, it’s undermaintained and generally a bit thin, but at least it’s there. (And no disrespect to the maintainers here: it’s numbers I’m complaining about, not individual effort). Installs largely work; uninstalls or upgrades often don’t. But until Winbrew is reliable, there we are. I managed to get ack installed via Chocolatey, so two cheers for it, at least; all the packages mentioned above are available too.
I’m talking about the likes of nvm and chruby here. I used Nodist (because it was available on Chocolatey and nvmw wasn’t); there’s also pik for Ruby, but I couldn’t get it working.
Call me frivolous, but I have a background in typography and I cannot stand the look of type on Windows. The fault lies with the anti-aliasing, which only works in the vertical plane; type looks spindly and horrible, no matter how you set it up. If you’re used to modern-looking type, as rendered on OS X or Ubuntu, and you have any sort of sensitivity to these things, you just can’t live with it.
It’s not a total solution, but MacType saved me clawing my eyes out, at least.
Attempting web development on Windows feels like being thrown back in time five years or so. I’ve touched on the main pain points, but I can’t even remember how many, many times I had to ransack Stack Overflow to find out why such-and-such a gem wouldn’t work properly, or dig about in the awful Registry. How many times I restarted the machine to be told that Windows had decided to install 127 updates. How long and often I watched the spinny blue circle while a bleedin’ Explorer window opened. Ubuntu, in a VM, was considerably faster and more usable in every way.
The larger problem is that the whole experience of working on Windows is so rotten that nobody wants to fix this stuff. Plenty of GitHub issues I followed, on large, respectable repos, were unresolved. ‘Windows,’ said the maintainers. Won’t fix. Closed.
I really can’t blame them, and similarly, I can see why the whole open-source movement is so backward on the Windows platform; hardly anyone’s there, and community is what keeps all this going. I hope, for the sake of the people who have to use it, that it gets better. I won’t be hanging around to see, though.
So, top five.
An indisputable number one. Doesn’t matter whether you’re a Rubyist – you’ll get something from this. It’s very well produced, the regulars are well-practised, and the guests are frequently stellar (Kent Beck, e.g.). Despite (or perhaps because of) the Ruby focus, the topics tend to be relevant to software engineering in general, especially the reading list episodes: Patterns of Enterprise Application Architecture, Smalltalk Best Practice Patterns, Growing Object Oriented Software … it’s the canon, not just the Ruby stuff. Highly recommended.
An overview of what’s happening in open source. Very on the pulse. Great guests, good production values and educated hosts (although, in the nicest possible way, Adam Stacoviak would benefit the show by doing more listening and less talking).
I’m paying less attention to Javascript these days, but I still subscribe to and enjoy this podcast. Same stable as Ruby Rogues, and same presenter (the very personable Charles Max Wood); similarly high production values, too. Good guests and high-level chat.
It’s interesting to get a bit out of your comfort zone. Erlang people talk about different stuff. The show is patterned after the Ruby Rogues/Javascript Jabber format, but the nature of the community is so different that it’s not really comparable; production values are lower, but the guests, and the general tone, are much more oriented toward serious systems engineering, which makes a refreshing listen, for me.
Somewhat few and far between (it’s hard to tell if they’re even still going), these are what you’d expect from the PragProg stable: serious, somewhat conservative, but very valuable. At their best they afford a very high-level view of the practice of software development.
Honourable mentions: NodeUp, Git Minutes, Food Fight.
Don’t even talk to me about the Dev Show.
So from my last post you might guess I’m an advocate for JavaScript. Well, I have more time for the language than many Rubyists seem to: it has a nicely compact API and, once you’ve learned to avoid the potholes, it’s an easy language to get things done in. CoffeeScript makes it considerably less painful to read and write, and the Node ecosystem is generating some very useful toolsets and ways of working.
There are, though, some significant pain-points involved in writing non-trivial programs in JavaScript. Some of them are well-covered in the literature already, but there are some that seem not to be. I’m interested to gauge the extent to which others have come up against some of these, so here’s my take on what’s really wrong with JavaScript.
I’m not going to dwell on any of this, because it’s well-trodden territory,
but to get it out of the way: the scoping is wacky (and the var
trap
is a disaster); the comparison operators (==
vs ===
) are a kludge;
and the treatment of the this
keyword is tiresome. There are many
more dark corners (see, for instance, Gary Bernhardt’s WAT), but all
of these issues can be worked around more or less easily (and in fact
CoffeeScript largely does so). There are other issues with the
language that I’d argue are harder to deal with – in some cases,
practically impossible.
So, famously, JavaScript is not a class-based language. It uses prototypal inheritance. This is a somewhat simpler type of object-oriented language design, whereby object instances inherit both data and behaviour from other object instances. It can work quite well (it’s a very memory-efficient system if used properly), and it can to some extent be used to mimic class-based inheritance if that’s what you want. The problem with JavaScript, though, is what it uses for ‘objects’.
In JavaScript, an ‘object’ is effectively just a key–value structure. It is what in other languages is called a hash, a map or a dictionary. It looks like this:
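For example (the names here are invented for illustration):

```javascript
// A plain JavaScript object: keys mapping to values, where one of the
// values happens to be a function
var gizmo = {
  name: "gizmo",
  owner: "tom",
  describe: function () {
    return this.name + ", owned by " + this.owner;
  }
};
```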
We could either use this object as-is, in which case it’s what in a class-based language we’d call a Singleton, or we can use it as the prototype for another object instance (there are several ways to accomplish this – they’re covered elsewhere, so I won’t expand).
As you can see, there’s no distinction between a ‘method’ (a member of the object which is a function) and a ‘property’. Everything is just a value indexed on a key. Nice and simple, but take a CoffeeScript instance method like this one:
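Something like this (Thing is a stand-in class):

```coffeescript
thing: ->
  @thing ?= new Thing
```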
This looks pretty and idiomatic: it’s an accessor method which
assigns @thing
if it doesn’t already exist, then returns it.
See the problem?
CoffeeScript’s syntax is helping to hide the fact that we’ve bound the
instance variable to the same key as the method (this.thing
). I hope
this seems obvious to you, and that you’re wondering how anyone could
be so stupid as to do such a thing. I am here to tell you that this is
something that a busy developer will do (especially one familiar with
Ruby), and that it will then manifest a very subtle bug which does not
immediately show up in unit tests. I will leave you to speculate as to
how I can be so sure about this.
Now, sure, we could be sensible and create setThing
and getThing
methods, but, you know, we’re not writing Java. We’re in a highly
dynamic language. We want nice things. Or, we could create setters and
getters using the
new ES5 syntax
– but that’s not available everywhere (pre-IE9, for instance), and, I
would argue, is too unwieldy to apply as a rule.
So what we’re left with is doing something like:
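That is, stashing the data under a differently-named key – an underscore prefix is the usual convention (again, Thing is a stand-in):

```coffeescript
thing: ->
  @_thing ?= new Thing
```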
Which is … irritating. But if we want this kind of method syntax, that’s what we have to do, and we have to be vigilant that we don’t introduce mistakes like the above, because no lint tool will catch them.
It’s tempting to presume that this is a ‘feature’ of prototypal inheritance; it’s not. The two next best-known prototypal languages, Self and Lua, distinguish properties and methods just fine.
In JavaScript, famously, everything is an object. And that’s a good thing, right?
Well, first, it’s not really true of JavaScript: you need to ‘box’ primitives like numbers to get them to behave like objects, but fair enough, many languages do something similar. Second, and more importantly, see above as regards what objects in JavaScript actually are. What we’re really saying is ‘everything is a key–value structure’. Is that a good thing?
No. It isn’t. Imagine another hypothetical CoffeeScript method:
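Say, a shallow-copy method like this one (the method name is my invention):

```coffeescript
copyObject: (inputObject) ->
  copy = []
  copy[key] = value for key, value of inputObject
  copy
```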
Spot the deliberate mistake? Busy developer has initialised copy
to
an array rather than an object. The really bad news? This method will
work almost as if it were correct. We can get every key of
inputObject
on copy
correctly, because an array, in JavaScript, is
an object, and therefore is just a key–value structure. We can set
(pretty much) any key on it we like, and get it back intact. But
again, subtle bugs abound; arrays are not intended to be used like
this, and things will go wrong if, for example, you start trying to
iterate over copy
’s properties, expecting it to be a ‘real’ object.
Ouch.
Try this, in a CoffeeScript class:
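A reconstruction of the kind of class meant here, assuming jQuery for the event binding (the class name is a stand-in):

```coffeescript
class Clicker
  bindClick: ->
    $("#clickable").on "click", @onClick

  onClick: (event) ->
    @respondToClick()

  respondToClick: ->
    alert "clicked!"
```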
So presuming bindClick
has been called, what happens when we click
on "#clickable"
?
respondToClick will never be called, and you’ll almost certainly get
will never be called, and you’ll almost certainly get
an exception. Remember, there are no methods in Javascript, only
functions. We bound the function this.onClick
to the click event;
that function has no idea that it’s also a method. Why should a value
in a key–value structure know which structure it belongs to? In the
context of the function this.onClick, this is whatever the call site supplies: with jQuery’s event binding it’s the DOM element that was clicked, and in a plain function call it’s the global window object. Neither of these has a respondToClick method, so we get an exception.
In short, we can call any function in scope, from anywhere, regardless of
whether it’s part of an ‘object’; and attaching a function to an object as
a method only binds this
correctly when the function is called in
the context of its parent object.
The correct way to do the above is of course:
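That is, something along these lines:

```coffeescript
bindClick: ->
  $("#clickable").on "click", (event) => @onClick(event)
```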
We pass an anonymous function to .on
, explicitly binding this
to
our current context using CoffeeScript’s =>
operator.
All of the above can be worked around or avoided; at
present, one very important thing can’t be. JavaScript has no
equivalent to Ruby’s method_missing
, Smalltalk’s
doesNotUnderstand
, Objective-C’s forwardInvocation
, PHP’s __call
,
etcetera. You can’t catch a call to an undefined method in JavaScript
and then do something with it. This is, again, to do with the fact
that there’s nothing to distinguish ‘properties’ from ‘methods’ in
JavaScript; plenty of ‘properties’ won’t be callable as functions, but
that doesn’t necessarily mean they should be.
It’s important to be able to do this in traditional object-oriented
design; we need it to do proper delegation, for example. It’s also a
tremendously convenient feature if you want to do metaprogramming
(e.g. adding properties or methods to an object dynamically at
runtime). There are two ways to sort-of do it right now: one is
Mozilla’s __noSuchMethod__
, which is only supported in SpiderMonkey
and likely to be deprecated; the other is the ‘official’ ECMAScript
proposal, Proxy, which is characteristically circuitous and
counterintuitive. It can be turned on in V8 (and thus in Node), but
you’d be pretty far out on a limb to use it.
JavaScript was designed in a hurry, and it wasn’t designed to build large, long-lived, maintainable applications. The fact that people are using it to do so is a testament to the ingenuity of serious JavaScript users, and their insight that what JavaScript does offer makes the failings tolerable.
They are significant failings, though. I’ve yet to hear anyone convincingly argue that JavaScript’s version of OOP offers anything in addition to or distinction from more traditional versions; it’s a kind of Heisenbergian OOP, which breaks down as soon as you stop pretending it’s there. Similarly, I’m not convinced that the changes being made (and proposed) are the right ones. They make the language seem more complex – and this is a language whose simplicity is its virtue.
It made me realise that somewhere in my programming brain I was differentiating between ‘old world’ and ‘new world’ syntactic features, and the more I thought, the more interesting the thought was.
So the reason I am writing here is to try to examine what it was about CoffeeScript that prompted the thought, by presenting some syntactic features that, to me, feel ‘modern’. I’ll also try to explain why I think each feature belongs in a modern language.
The obvious number 1. CoffeeScript’s lambda syntax uses the form
(arguments) -> body
. The style is probably derived from
Haskell;
Erlang
also uses it, and it’s more recently been adopted by
Scala,
C# and (in a
slightly backwards form)
Ruby.
What this says about language evolution is obvious, I think: programmers want to use anonymous functions. They want to create them quickly and easily, with a meaningful and noise-free syntax. I’d suggest that this is evidence of the current popularity of functional programming style, which is probably a result of the emerging necessity to program for concurrency. I’d also venture that most programmers using this style are interested less in concurrency (at the hardware level) than in a convenient syntax for working with event-driven systems (as well as a language-level Strategy pattern).
CoffeeScript makes parentheses optional in many cases (as does Ruby);
it also, in certain cases, makes braces for declaring object literals optional; commas
separating object key-value pairs and array elements can be replaced
with newlines; semicolons as line delimiters are not used; comparison and
logical operators can in many cases be replaced with equivalent
English words (===
with is
, !==
with isnt
, &&
with and
,
||
with or
, etc.).
To the same effect, CoffeeScript, like Ruby, offers flexibility in
conditional expressions, so that unless
can be used in place of if
not
, and any conditional operator can be used postfix, allowing
bon mots like return unless x is 5
.
This makes the language easy to read. It reads fluently, uninterrupted by messy, alienating punctuation. Sometimes this visual clarity is bought at the expense of semantic clarity, though, and the CoffeeScript compiler can bite you if you’re too laissez-faire. Also, how to ‘phrase’ your code becomes an additional, and possibly unwelcome, decision. Rubyists have long faced these issues, and adherence to the language’s idioms is now nearly as important as correct syntax.
The movement to greater flexibility and fewer sigils seems to have met with more approval than hostility, though, and I’d argue that a driving reason for this is that less noisy, more malleable syntax enables the creation of domain-specific languages. Rails famously leverages Ruby’s syntax to create what amounts to a declarative language to describe things like entity relationships; other frameworks have tried to follow suit and been frustrated by their host language’s lack of flexibility. I won’t name names.
In CoffeeScript, as in
Ruby, Scala, OCaml, Haskell and Lisp, (almost)
everything is an expression. In simple terms, this means that
every language construct returns a value. An important consequence of
this is implicit returns: because every construct must return a
value, all functions must return a value, whether return
is called
explicitly or not. By default, a function returns the value of the
last evaluated expression. This, again, makes for a very concise and
expressive syntax, which feels like a function just is its evaluated
body:
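A trivial example of my own:

```coffeescript
square = (x) -> x * x
```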
This applies to conditionals too:
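For instance (an invented example) – the whole if/else evaluates to a value, which is assigned:

```coffeescript
parity =
  if n % 2 is 0 then "even" else "odd"
```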
(Note that this is not ‘idiomatic’ CoffeeScript.)
Like Scheme, C#, Ruby, Python and JavaScript, CoffeeScript uses
coalescing operators. To JavaScript’s coalescing ||
and ||=
,
CoffeeScript adds a coalescing existential operator, ?
and ?=
. This is
particularly useful in combination with the ‘everything is an
expression’ behaviour described above:
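For example (names invented): the right-hand side evaluates to config.port when it exists and isn’t null, and to 8080 otherwise:

```coffeescript
port = config.port ? 8080
```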
It can also be used for simple memoization:
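Along these lines (buildCollection is a stand-in for some expensive call):

```coffeescript
collection: ->
  @_collection ?= buildCollection()
```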
And even to conditionally call a function:
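The accessor variant of the existential operator calls a function only if it exists and isn’t null:

```coffeescript
callback?()
```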
You may have noticed that most of the languages I’ve mentioned above are some distance from ‘modern’; Scala (released 2003) has the best claim. The others range from early adulthood (Ruby, mid-1990s) to spry seniority (Lisp, late 1950s). Why do aspects of their syntaxes, then, seem to contribute to a sense of modernity in language design?
I think this is quite simple: they don’t look, or act, too much like C.
C is a beautiful language, which does what it’s designed for very well: it interacts with von Neumann-architecture machines at a level appreciably higher than assembly language, whilst being highly performant, small, well-specified, and clear. But it became so popular that the languages that followed it were pretty much required to cargo-cult its syntax. What is Java but a traduction of Smalltalk with C-like syntax bolted on? What is JavaScript but a traduction of Scheme with Java-like syntax bolted on (by mandate, in this case)?
As a result of C’s (and its successors’) popularity, alternative
languages like Lisp and Smalltalk rather faded into the background;
their practical elements were absorbed, but their syntaxes were
repressed. One could argue that Perl brought some of their syntactic
ideas closer to the mainstream – I don’t know enough Perl to have an
opinion. But once you’ve written Ruby or Python or CoffeeScript,
all of them strongly influenced by more humanist languages like the
aforementioned, it’s hard to justify another set of braces, another
check for null
, another for i = 0;
. And there you are. Old world:
C. New world: not C.
The above is a small selection of things I want from a ‘modern’ language. Others might be (not in strict order):
Keyword arguments, for one – ideally Smalltalk-style keyword selectors, where setRed:Green:AndBlue: points to a different method than setRed: or setRed:AndGreen:. Ruby 2.0’s syntax strikes a nice balance, allowing arbitrary arguments in addition to keyworded ones.

Again, I’m not arguing what a ‘modern’ language should look like so much as observing my own prejudice; also, I’m well aware that there are horses for courses, and new(ish) languages like Go and Rust are intended for different purposes than Ruby or CoffeeScript.
I think it’s all to play for at the moment; and I think it’ll be a shame if yet another C-like language wins. My money was on Scala as a general purpose application developers’ language, but I fear it’s getting a bit astronautical. JavaScript is highly optimised, and has a (reasonably) POSIX compliant host environment – can we now consider using it for systems programming? And Ruby seems to be becoming the language of the Cloud. Perhaps we are doomed to, or blessed with, a pluralistic future.
I began by following the Node convention of callbacks taking (error, data)
arguments, but I found
that tended to make the code a little circuitous. As the project
in question would largely be used on the front-end, I didn’t feel I
really had to stick to those conventions, so I looked into
jQuery’s Deferred object,
and especially its use of promises.
This isn’t a post about promises, so I won’t go too deeply into what
they are, but suffice to say they offer an alternative way to manage
asynchrony, with an arguably cleaner syntax; the Deferred
object
also simplifies chaining asynchronous functions, and provides a way to
signal when a group of functions has completed (when their deferreds have resolved, in the parlance). This is
jQuery.when.
when
is very useful; you pass it some functions (each of which must call
Deferred.resolve
on completion), and it tells you when they’re all
complete. Only problem is that the functions are called in
parallel. Now, as a default behaviour, this makes perfect sense: we
are talking about asynchrony after all. But in some cases, we want to
ensure that our functions execute in a particular sequence, each
function awaiting the completion of its
predecessor. jQuery.Deferred
, so far as I was able to find out,
doesn’t provide for such a use case.
There are of course other libraries which do – Async.js, for example – but I’d thrown in my lot with jQuery, and I was reluctant to import a whole library for just this small and apparently simple piece of functionality.
Now, at work, we’re lucky enough to be using CoffeeScript for almost all our Javascript-y business, and the solution to my problem turned out to be quite a beautiful little five-liner:
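The five lines were something like the following – a reconstruction from the description that follows, so treat the variable names as guesses:

```coffeescript
sequence = (tasks) ->
  seq = tasks[0]()
  for i in [1...tasks.length]
    do (i) -> seq = seq.then -> tasks[i]()
  seq
```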
So, what’s happening here? Well, we pass an array of tasks (functions
which implement Deferred.resolve
, and return a promise
) to sequence
. Sequence then
executes the first task, and iterates over the rest, each time
calling Deferred.then
on the new value of seq
; the function passed
to then
calls the next task, which returns a promise
, and so
on. The sequence itself is returned; this value is in fact also a
promise
– the one returned by the last function in the sequence –
which allows you to act on the sequence’s completion.
Note the use of CoffeeScript’s do
keyword, which creates an
immediately-executing function. This is necessary to preserve the
scope of the index variable in the function passed to then
;
were we not to enclose the call to then
in an outer function, i
would remain scoped to the i
given in the loop declaration. When the
function using i
was called, it would of course take the value of i
at the time of being called, not at the time of the function’s
creation – which value is most likely to be the value of i
at the
end of the loop.
Of course, the simplicity of this code is bought at the expense of
error handling, of which there’s none: you’d at least want to check
you had an array of length > 0, to avoid exceptions, and it would make
sense to avoid the iteration altogether if the array were only of
length 1. Nevertheless, I think it’s a nice demonstration of both
CoffeeScript’s concision and $.Deferred
’s flexibility.