I’ve been banging away on Clojure for a few days now, and while it would obviously take months of study and grinding through a big serious real-world software project to become authoritative, I think that what I’ve learned is useful enough to share.
[This is part of the Concur.next series.]
1. It’s the Best Lisp Ever · I don’t see how this can be a controversial statement. Issues of language-design aside, every other Lisp I’ve worked with has been hobbled by lacklustre libraries and poor integration with the rest of the IT infrastructure. Running on the Java platform makes those problems go away, poof!
Let’s assume hypothetically that there are other Lisps where certain design choices are found to be better than Clojure’s. Well, you can pile all those design choices up on top of each other and the pile will have to be very high before they come close to balancing the value of Java’s huge library repertoire and ease of integration with, well, just about anything.
2. Being a Lisp Is a Handicap · There are a large number of people who find Lisp code hard to read. I’m one of them. I’m fully prepared to admit that this is a shortcoming in myself not Lisp, but I think the shortcoming is widely shared.
Perhaps if I’d learned Lisp before plunging into the procedural mainstream, I wouldn’t have this problem — but it’s not clear the results of MIT’s decades-long experiment in doing so would support that hypothesis.
I think it’s worse than that. In school, we all learn 3 + 4 = 7 and then sin(π/2) = 1, and then many of us speak languages with infix verbs. So Lisp is fighting uphill.
It also may be the case that there’s something about some human minds that has trouble with thinking about data list-at-a-time rather than item-at-a-time and thus reacts poorly to constructs like
(apply merge-with +
  (pmap count-lines
    (partition-all *batch-size*
      (line-seq (reader filename)))))
[Update:] Rich Hickey provides some alternative and arguably more readable formulations of this code.
I think I really totally understand the value of being homoiconic, and the awesome power of macros, and the notion of the reader. I want to like Lisp; but I think readability is an insanely important characteristic in programming systems.
Practically speaking, this means that it’d be hard for me to go out there on Sun’s (or Oracle’s) behalf and tell them that the way to take the best advantage of modern many-core hardware is to start with S-Expressions before breakfast.
3. Clojure’s Concurrency Features Are Awesome · They do what they say they’re going to do, they require amazingly little ceremony, and, near as I can tell, their design mostly frees you from having to worry about deadlocks and race conditions.
Rich Hickey has planted a flag on high ground, and from here on in I think anyone who wants to make any strong claims about doing concurrency had better explain clearly how their primitives are distinguished from, or better than, Clojure’s.
4. Agents Are Better Than Refs or Atoms · I’m using these terms in a Clojure-specific way: I mean Clojure’s agents, refs, and atoms.
Agents are not actors nor are they processes in either the Operating-System or Erlang senses. I’m not actually sure how big a difference that makes; my suspicion is that programmers probably think about using all three in about the same way, and that’s OK.
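In case the names are unfamiliar, here are the three in minimal form (toy values, not from my Wide Finder code):

(def r (ref 0))     ; coordinated and synchronous; change it in a transaction
(dosync (alter r inc))

(def a (atom 0))    ; uncoordinated and synchronous
(swap! a inc)

(def g (agent 0))   ; uncoordinated and asynchronous
(send g inc)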
Anyhow, agents solve concurrency problems in the simplest possible way: By removing concurrency. Send functions to an agent and they’ll get executed one at a time in whatever order, taking the agent variable as their first argument, replacing its value with their output.
Here is an example. I have a map (i.e. hash table) called so-far in which the keys are strings and the values are integers counting how many times each string has been encountered. If I use refs to protect both the hash table and the counters, I get code like this:
1 (defn new-counter [ so-far target ]
2   (dosync
3     (if-let [ c (@so-far target) ]
4       c
5       (let [ counter (ref 0) ]
6         (ref-set so-far (assoc @so-far target counter))
7         counter))))
8
9 (defn record [target so-far]
10   (if-let [ counter (@so-far target) ]
11     (incr counter)
12     (incr (new-counter so-far target))))
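The incr function isn’t shown; a minimal version would just bump a counter ref in a transaction of its own:

(defn incr [counter]
  (dosync (alter counter inc)))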
Let’s start with the record function on Line 9. The if-let looks up the target in the hash, ignoring concurrency issues with @, and uses incr to bump the counter, if there’s one there. If there isn’t, it calls new-counter to make one.
Lines 3 and 4, in new-counter, are where it gets interesting. Since everything’s running concurrently, we can’t just go ahead and bash a new counter into the so-far hash table, because somebody might have come along and done that already, recorded a few values even, so we’re at risk of throwing away data. So after we’ve locked things down with dosync, we check once again to see if the counter is there and, if so, just return it. Otherwise we create the new counter, load it into the hash, and return it.
On the other hand, consider the agent-based approach; once again we have a hash table called so-far, but protected by an agent. If the code wants to increment the value for some target, it says

(send so-far add target)
This will eventually call the add function with the hash table (not a reference or anything, the actual table) as the first argument, and target as the second. Here’s add:
(defn add [so-far target]
  (if-let [ count (so-far target) ]
    (assoc so-far target (inc count))
    (assoc so-far target 1)))
Considerably simpler, and nothing (concurrency-wise) can go wrong.
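Putting it together, a minimal sketch of the setup and a couple of updates (the agent definition isn’t part of the fragment above):

(def so-far (agent {}))   ; the hash table, wrapped in an agent
(send so-far add "x")
(send so-far add "x")
(await so-far)            ; block until the queued sends have run
@so-far                   ; => {"x" 2}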
I do have one nit with agents. Most of my code was infrastructure: a module that reads lines out of a file and passes them one at a time to a user-provided function. At one point, I made some of the code that fixes up the lines that span I/O-block boundaries agent-based, because it was simpler. Unfortunately, that code also calls the user-provided function, and when one of those also tried to send work off to an agent, everything blew up, because you can’t have a send inside a send.
Actually, I think my nit is more general; in an ideal world, concurrency primitives would all be orthogonal and friction-free. But anyhow it’s a nit, not an architectural black hole, I think.
5. Clojure Concurrency Does Buy Real-World Performance · The Wide Finder runs I was using to test were processing 45G of data in a way that turned out to be CPU-limited in Clojure (I think due to inefficiencies in Java’s bytes-on-disk-to-String-objects pipeline, but I’m not sure). So making this run fast on a high-core-count/low-clock-rate processor was actually a pretty useful benchmark.
[Update: Now I’m sure that the bytes-to-strings thing is not the problem; I’m getting much better times, it’s an interesting story and I’ll write it up.]
The single most important result: Clojure’s concurrency tools reduced the elapsed run-time by a factor of four on an eight-core system, with a very moderate amount of easy-to-read (for Lisp) code.
6. Performance is Wonky But It Doesn’t Matter · Some more results:
The amount of extra CPU burned to achieve the 4× speedup was remarkably high, more than doubling the CPU of the whole job.
The costs of concurrency, as functions of whether you use refs, or map/reduce, or agents, and also of block-size and thread-count and so on, are wildly variable and exhibit no obvious pattern.
Well, agents did seem to be quite a bit more expensive than refs. But refs were pretty cheap; a low-concurrency map/reduce approach was not dramatically slower than doing the Simplest Thing That Could Possibly Work with refs.
These results are irrelevant. Remember, this is Clojure 1.0 we’re working with. If we determine that the throughput of the agent handlers is unacceptable, or that the STM-based infrastructure is consuming excessive CPU overhead, I’m quite confident that can be fixed. For example, we could lock Rich Hickey in a basement and put him on a tofu-and-lettuce diet.
7. The Implementation Is Good · I pushed Clojure hard enough to have a couple of subtle code bugs blow out the whole JVM, which takes considerable blowing-out on a Sun T2000. But the bugs were mine not Clojure’s. In the course of quite a few days pounding away at this thing with big data and tons of concurrency, I only observed one bug that I’m pretty sure is in Clojure, and then I couldn’t reproduce it.
Also, I never observed code in Clojure running significantly slower than the equivalent code in Java.
So if I’m wrong and there’s scope for a Lisp to take hold in the mainstream, Clojure would really be a good Lisp to bet on.
8. The Documentation Is OK · The current sources are Stuart Halloway’s Programming Clojure, Mark Volkmann’s Clojure - Functional Programming for the JVM, and of course the online API reference.
I used the book most, and while it’s well-written and accurate, it’s either missing some coverage or a little out of date, as I discovered whenever I published code and helpful commenters pointed out all the newer and better functions that I could have used. I also found the apps they built the tutorial examples around less than compelling.
Also, you can look through the source code, which is mostly in Clojure, and even for someone like me who finds Lisp hard to read, that’s super-helpful. But it’s clear that there’s good scope for a “Camel” or “Pickaxe” style book to come along and grab the high ground.
9. The Community Is Excellent · As I’ve already observed, the Clojure community is terrific; we’ll see how well that stands the test of time. I suspect I may linger around #clojure even when I’ve moved on to other things, just because the company’s good.
10. The Tools Aren’t Bad · I used Enclojure and I recommend it; having it set up and manage my REPL was super-convenient, and it never introduced any bugs or inconsistencies that I spotted. It’s also very early on in its life and there are rough spots, but really it’s good stuff.
I gather that rather more people use Emacs and some flavor of SLIME, and I’m sure I would have been just fine with that too.
11. Tail Optimization Is Still a Red Herring · I wrote admiringly in Tail Call Amputation about the virtues of Clojure’s recur and loop forms, as opposed to traditional tail-call optimization. This is clearly a religious issue, and there’s lots of preaching in the comments to that piece. I read them all and I followed pointers, and here’s what I think:
Clojure’s loop/recur delivers 80% of the value of TCO, with greater syntax clarity. Clojure’s trampoline delivers 80% of the remaining 20%.
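For anyone who hasn’t seen them, minimal sketches of both; the toy functions are mine, not from my Wide Finder code:

(defn sum-to [n]          ; loop/recur: iteration in constant stack space
  (loop [i n acc 0]
    (if (zero? i)
      acc
      (recur (dec i) (+ acc i)))))

(declare my-odd?)         ; trampoline: mutual recursion via returned thunks
(defn my-even? [n]
  (if (zero? n) true #(my-odd? (dec n))))
(defn my-odd? [n]
  (if (zero? n) false #(my-even? (dec n))))
;; (trampoline my-even? 1000000) => true, without blowing the stack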
Near as I can tell, that leaves state-machine implementation as the big outstanding case that you really need TCO for. I’ve done a ton of state-machine work in my career, and while I recognize that you could implement them with a bunch of trampolining tail-called routines, I’ve never understood why that’s better than expressing them in some sort of (usually sparse) array.
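To make that concrete, here’s the shape of the table-driven alternative I mean; the states and transitions are made up:

(def transitions          ; maps [state input] to the next state
  {[:start \a] :saw-a
   [:saw-a \b] :accept})

(defn run-fsm [table state inputs]
  (reduce (fn [s c] (get table [s c] :reject)) state inputs))

;; (run-fsm transitions :start "ab") => :accept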
So, my opinion is that post-Clojure, this argument is over. I suspect that this will convince exactly zero of the TCO fans, probably including Rich Hickey, and that once again the comments will fill up with people explaining how the real conclusion is that I don’t actually understand TCO. Oh well.
Thanks! · To Rich and the community for welcoming me and helping. I stuffed my code fragments into the SVN repository at the Kenai Divide and Conquer project; they ain’t pretty. If anyone wants to have a whack at the big dataset, send me a hail and if I think you’re serious I’ll get you an account.
The quest for the Java of Concurrency continues.
From: Phil (Dec 01 2009, at 16:59)
> Agents Are Better Than Refs or Atoms
This strikes me as kind of a funny thing to say.
If you need real-time consistency, refs are really the only way to go. I guess maybe what you're getting at here is that for most concurrent problems you might think you need the ACI properties that transactions provide, but (at least in batch-process contexts) you can get the job done more simply with agents. Is that the idea?
From: Janne (Dec 01 2009, at 19:53)
I really think a major problem with Lisp-like languages' readability boils down to them being prefix languages. They tax your short-term memory.
Stack languages are almost unfairly easy to grasp despite looking like line noise; it's because you can read the code without keeping stuff in mind:
"we got these values, and we do this operation on them - ok - then do that operation on the result, then call this function with that in turn ..."
Prefix languages require you to keep the entire expression on your own internal stack, something we don't do well:
"do operation on this expression (which is this op on this value and (this recursive call) which gets us, umm, this result?) and this expression (...) aaand, what was the result of that first expression now again?"
I find it hard to look at a prefix-form expression and figure out what it's actually doing, simply because it takes a large amount of effort for me to trace it in my mind.
From: Alexy Khrabrov (Dec 01 2009, at 20:26)
Tim -- thank you so much for exploring Clojure for concurrency! I followed the original Wide Finder and then Wide Finder 2 with great interest, and Clojure redux is superb, especially the contributions from Technomancy and John of Milo.
I hear you on Lisp readability -- am more in the Scala camp on that one -- but am trying to love it. Although you can't really learn to love; the question is, whether it's nurture or nature. Perhaps one's either born with homoiconic preferences or not. (Then if you don't love it, you just don't.)
I saw WF2 has a pretty good Scala result, apparently better than Clojure's -- and that's a year-old Scala. Since you chalk it up to Java Unicode, why does Scala squeeze out better numbers? Unless I mis{read,remember} the numbers.
From: Duncan Mak (Dec 01 2009, at 22:35)
I don't understand why you chose that line of code with COUNT-LINES as particularly tricky S-Expression syntax.
(apply merge-with +
  (pmap count-lines
    (partition-all *batch-size*
      (line-seq (reader filename)))))
Syntactically, you can get back to C-style just by moving some parens around and sprinkling some commas.
apply (merge-with,
       +,
       pmap (count-lines,
             partition-all (*batch-size*,
                            line-seq (reader (filename)))));
Does that make it any easier to read?
(I don't think so, because even in traditional C syntax, function application, like your example of a trig function, is prefix notation.)
The only syntactic hurdle I see is "1 + 1" vs. (+ 1 1), but once you get over that, there's nothing more to it, is there?
I think the recent success of Clojure has shown that S-Expression syntax is *not* the thing that makes it 'unreadable'; plenty of people are grokking it and using it.
The difficulty for newcomers, IMHO, stems from the use of higher-order functions and function combinators.
Programmers in C-like languages like to name a lot of variables, so they can mutate their values one step at a time; with FP languages, one would just write out the entire operation as a single expression. This change of style could be difficult for those not accustomed to it, but it has nothing to do with syntax per se (see ML, Haskell, etc).
I guess my point is this:
I do not disagree that your example is quite tricky; it certainly takes some time to digest what it's doing - but I'd very much rather take that time to figure out 4 lines of code than read 10x the LOC to get at the same thing.
From: Antti Rasinen (Dec 01 2009, at 23:20)
The readability thesis consisted of two points. The first, about Lisps being prefix languages, is somewhat valid. The second, about many-vs-one-values processing, is not.
I'll process the value-question first. There are several languages that allow you to manipulate data on a sequence or array level. In Matlab you have matrices, in Python iterators, in Haskell lazy sequences. Achieving readability with these requires some practice and some exposure. In other words, it is a skill to learn.
There are some cases when item-by-item processing gives you the neatest, i.e. most readable, result. This can happen when the language toolset does not have the reach to express what you want on a higher level. (I'm looking at you, Matlab.)
Often the high-level abstraction is the better one. How can you possibly improve (map square xs)?
The readability problems of the (apply ...) snippet stem mostly from the fact that it does everything from reading the file to giving you the result in one expression.
You can improve it very easily with a let or two. For example, I'd probably name the (partition-all ...) form as batches and perhaps (pmap ...) as results. The data changes its form with these operations. Naming them seems appropriate.
Back to the first point. Prefix sucks. A-MEN BROTHER! Prefix sucks for maths. A bit.
Simple expressions are easy to read in any form. (+ 1 2) vs 1 + 2 vs 1 2 +.
Slightly more complex expressions are harder to read in prefix. (/ (- (f (+ x h)) (f x)) h) SAY WHAT?
Very complex expressions are hard to read in any form.
The solution to any math readability problem is to abstract your problem onto a higher level. You create functions for mean, standard deviation and kurtosis. Even your example used the sine function instead of the corresponding Taylor series.
The difference between prefix and infix is that with prefix you encounter the abstraction limit somewhat sooner.
For everything other than maths, most programming languages use prefix notation. Consider printf!
For many OO implementations you can argue both ways: x.foo(a, b) is either a prefix form of (x.foo a b) or an odd infix operation x `foo` (a, b).
There. Good abstractions and proper naming where necessary reduce thesis 2 to a very insignificant issue.
From: Attila Szegedi (Dec 02 2009, at 01:36)
I have the same problem with readability, and I also admit the problem is with me, not with the language.
I found that when trying to comprehend a LISP statement, the easiest approach for me is to work from the inside out - understand what result the innermost expression produces, then how the one outside it transforms that, and so on.
This helps me understand the code; it doesn't help much in writing code though... :-(
From: Bryant Cutler (Dec 02 2009, at 01:51)
A nitpick, I know, but I think you mean the "Pickaxe" book and not "Pitchfork", unless there's some new book floating around I'm not aware of.
Also, I think you bring up a great point, in that there's an inherent conflict in computer language syntax. Regularity and consistency make macros possible, but cause everything to look too visually similar for humans to parse well; conversely, languages that have enough visual differentiation between syntactic constructs end up being difficult to parse or transform programmatically. Do you think there's a happy medium? or are human-comprehensible macros impossible without SExps?
From: Gavin (Dec 02 2009, at 02:01)
It's not that it's impossible to read LISP-like expressions per se.
It's that having done so doesn't help you much when you return to the code next time around.
I find myself similarly disoriented all over again - and unable to profit from any memory of my previous reading.
From: Rich Hickey (Dec 02 2009, at 04:38)
Thanks again, Tim, for your coverage of Clojure!
One problem some people have with the example in #2 is its inside-out nature. You can write nested calls in any language, and people do, up to a certain degree of complexity. Then they will (and should) switch away from 'inside out'. The same is true in Clojure, which supports multiple sequential styles cleanly.
I've put up some variants, in inside-out, pipelined, and step-wise Clojure styles:
http://gist.github.com/247172
Most Clojure programmers will be familiar with and mix these three styles, preferring, I hope, the one that makes the code clearest in a given situation.
As far as collection-at-a-time vs item-at-a-time, that's not a Lisp thing, and people are going to have to get over it. That is how parallelized libraries are going to work, in every language.
Rich
From: dnolen (Dec 02 2009, at 06:05)
As Rich Hickey notes, there are ways to improve readability if that's a concern; his examples follow:
(->> (line-seq (reader filename))
     (partition-all *batch-size*)
     (pmap count-lines)
     (apply merge-with +))
or
(let [lines (line-seq (reader filename))
      processed-data (pmap count-lines
                           (partition-all *batch-size* lines))]
  (apply merge-with + processed-data))
From: Patrick Logan (Dec 02 2009, at 06:29)
"agents solve concurrency problems in the simplest possible way"
Not sure by what measure agents are simplest among Clojure's concurrency mechanisms. Consider that agents are asynchronous while atoms are synchronous. I would claim atoms are simpler for that reason.
Too bad about the parentheses thing. I will never understand why so many people have such a fear of sexprs. But I have come to accept that. I suppose not even the majesty of Clojure will overcome that reality.
From: Dmitriy V'jukov (Dec 02 2009, at 07:54)
It seems this is becoming a kind of poll; just my 2 cents:
I can't read that mix of braces and words. No idea as to how I should interpret it.
From: Robert Young (Dec 02 2009, at 08:28)
@Attila:
I found that when trying to comprehend a LISP statement, the easiest approach for me is to work from the inside out - understand what result the innermost expression produces, then how the one outside it transforms that, and so on.
This helps me understand the code; it doesn't help much in writing code though... :-(
The same thing happens with SQL (non-trivial). A decent editor, which outdents automagically, will solve the problem. I use SlickEdit, and it serves.
@Rich:
As far as collection-at-a-time vs item-at-a-time, that's not a Lisp thing, and people are going to have to get over it. That is how parallelized libraries are going to work, in every language.
Maybe "people" will stop picking on SQL and the relational database once language designers force them to think properly. He he, Codd was right.
From: Robert Fisher (Dec 02 2009, at 09:01)
I was raised on C and came to Lisp fairly late. I’m very curious about why so many people seem to find the syntax difficult. Prefix notation is so often cited, but I have a hard time believing that is really a significant factor.
C (and similar languages) is predominantly prefix, e.g. mpz_add(x, y, z);
Imperative English is prefix style. e.g. Add X and Y.
A Lisp macro can give you infix notation. So Lisp can not only add the little bit of infix notation that C has; you can use such a macro to apply infix notation in places C doesn't allow it.
(Besides, a good Lisp will have arbitrary precision math built-in so that you can just say “(infix x + y)” instead of “(infix x arbitrary-precision+ y)”.)
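A minimal version of such a macro in Clojure, handling just one binary operation, might look like this:

(defmacro infix [a op b]
  (list op a b))

;; (infix 1 + 2) expands to (+ 1 2) => 3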
From: Matěj Cepl (Dec 02 2009, at 09:41)
I freely admit that I've drunk plenty of Mozilla Kool-Aid lately, but it seems to me that if you consider JavaScript as a functional language (see http://javascript.crockford.com/javascript.html of course), then it might be a pretty decent alternative to all these Lispish alternatives. The readability, especially, looks really promising.
From: Leon P Smith (Dec 02 2009, at 12:40)
Once upon a time, I ignored Lisp because of its syntax. If it weren't for Standard ML, I probably wouldn't have ever gotten on the Lisp bandwagon.
So I can sympathize, but at this point your snippet took no effort for me to read. It took considerably more effort (but was still easy) to guess what the primitives did, since I haven't really tried Clojure yet.
Indentation is the key to reading Lisp code, and a text editor that supports auto-indenting and re-indenting is essential to working with lispy syntax. I can tell your sample is a function pipeline by the shape of the code.
Assuming a suitable compiler, a state machine implemented via tail calls will have its state encoded in the program counter. An array-based implementation would need a little interpreter to drive the machine.
It's a tradeoff: tail calls might lead to a faster implementation, while a table might be a bit more compact. It's also not an all-or-nothing proposition; hybrid approaches are quite easy.
Code written in continuation-passing style is another major justification for TCO.
From: John Cowan (Dec 02 2009, at 13:38)
In the Lucid family of languages, everything is a stream, even constants: the value of 1 + 2 is 3, 3, 3, .... Any old Unix shell hacker really shouldn't have a problem dealing with data by the stream rather than by the item.
From: Tony Garnock-Jones (Dec 02 2009, at 14:20)
Tim, you may be interested in the following blog post from Guy Steele about why properly implemented tail-calls are necessary to preserve proper object-oriented abstractions: http://projectfortress.sun.com/Projects/Community/blog/ObjectOrientedTailRecursion
From: Eric Dobbs (Dec 02 2009, at 17:07)
I totally disagree that being a dialect of Lisp is a handicap. In fact, I speculate LISP is related to the excellence you find in the community, and that both are related to the solid implementation and real-world performance.
Several thoughts about readability from a professional Perl programmer with some background in LISP.
If you spend your days writing in a more functional style, or in an even mix of functional and object oriented, reading these lisp idioms is pretty easy (as already mentioned by @leon -- indentation and function pipelines). I also don't know clojure, but the idioms are familiar and I can make educated guesses about the parts I don't know.
I think the density thing is more interesting, though. The time it takes to study a few lines of LISP probably compares favorably to studying the many lines of code one would write in another language to accomplish the same computation. But you do have to stop a moment and actually study the code a little bit. Familiarity with the idioms (like function pipelines as @leon called them) makes that much faster. In the long run there are just fewer words to read and the idioms become quite readable.
A related point is about familiarity with the tools. In LISP, your editor makes it easy to navigate the s-expressions which makes it trivial to see how things are nested. This is similar to the way java IDEs make it trivial to navigate the class hierarchy -- to descend into a given method call and pop back as needed. Understanding how a function pipeline works is not unlike understanding how messages are passed between objects.
Introductory examples in books and on the web are always too simple to express the magic. These abstractions and idioms only really become interesting when they make a particularly complex problem more readable. But complexity is NOT what you aim for in introductions. That is the main barrier to more widespread understanding and appreciation of LISP. Getting your head around the idioms means wrestling both with unfamiliar abstractions and with the complexity in the problem domain.
I'm not even remotely surprised that you found some short LISP examples to take on the nastiness in this domain.
From: Public Farley (Dec 02 2009, at 19:07)
Thanks, Rich, for weighing in on the readability "issue". Almost every function that I write in Clojure takes the "step-wise, with labeled interim results" form, and I find it very readable. Almost self-documenting.
Sprinkle in appropriate comments (which I find many Clojurians don't do) and I find the code just as readable as Java, Ruby, etc.
Sometimes I think of Clojure as a secret weapon. The average developer's aversion to surrounding parentheses and prefix notation prematurely turns them away from an awesome programming experience.
From: larry (Dec 02 2009, at 21:58)
I find it strange that people complain about the syntax of Lisp when the syntax of HTML is pretty much the same as S-expressions, yet when the web was new everybody and their grandmother were writing HTML without a whimper of a complaint. I guess that was because HTML was the language of the web and the web was cool, so HTML was cool.
In fact, HTML and XML are arguably harder to read than the corresponding S-expressions.
I think I've read somewhere that it was easier to teach someone Lisp than Java because the syntax was so much simpler and more consistent.
From: Vladimir Sedach (Dec 02 2009, at 22:31)
"Well, you can pile all those design choices up on top of each other and the pile will have to be very high before they come close to balancing the value of Java’s huge library repertoire and ease of integration with, well, just about anything."
I don't know why people believe this stuff.
If you are not working on the JVM, you absolutely do not care about Java libraries. You want to easily link to C libraries. No one is pulling up a JVM just to use a Java library.
This is also the reason why there are not very many high-quality (note I said high-quality) 3rd party (I really like the Java Class Library though) Free Software Java libraries out there.
Java integration is a one-way street. If you're not using Java already, there is no way you're pulling it into your project on just a library/FFI (or even sockets) basis. That kind of integration only ever seems to get done on a file or network basis (ie - whole programs).
It must be said that this is not Java's fault - this is the reality for all languages other than C.
"I want to like Lisp; but I think readability is an insanely important characteristic in programming systems."
The question you are looking to answer is "why am I having a hard time reading Lisp code?" This leads to another question: exactly which Lisp code are you reading?
The argument about Lisp being inherently hard to read is an interesting but completely baseless hypothesis - first, there are the myriad people who serve as counterexamples; second, there's people's proven ability to become proficient in any human-created symbolic system with practice.
The key word here is practice - you need to read other people's good code, and you need to read it just as much if not more so than writing your own.
The flip side of this issue is the comparative readability debate. C is easier to read than Lisp, right? Why is C so easy to read? It must be because it was designed to be easy to read. Well, not really. C borrowed syntax from B, which borrowed syntax from BCPL ... which borrowed syntax from Fortran, which had to fit onto IBM punched cards.
The joke's on you, language syntax snobs!
"They do what they say they’re going to do, they require amazingly little ceremony, and, near as I can tell, their design mostly frees you from having to worry about deadlocks and race conditions."
Yup, just like object-oriented programming freed you from worrying about your domain design problems!
Deadlocks (and in some instances, race conditions) are also a domain problem. What tools like STM do is just elevate the level at which you are making mistakes. Which is a great thing, but not the same as eliminating the possibility of mistakes altogether (which is impossible to do, because this is a problem specific to your domain/application).
"This is clearly a religious issue, and there’s lots of preaching in the comments to that piece."
No, this is a clear technical issue. You need continuations to express concurrency sans parallelism (ex - asynchronous I/O) in a composable way. If your language does not come with first-class continuation or monadic programming support, it needs tail-call elimination to make continuation-passing style possible (and by possible, I mean possible for non-trivial programs).
If you lack both continuations and tail call elimination, it is impossible to, for example, implement AIO libraries/servers without exposing either their state machine or hand-coded CPS internals (callbacks) to the library users. This is exactly what node.js has been forced to do. Trampolines will not help you here.
From: Hans (Dec 03 2009, at 08:47)
I love Clojure, but I don't yet have a really good handle on it or any other Lisp. I had hoped to use it for my research, but a limitation of Java (no complex number type) made it infeasible, and so I was stuck with Matlab.
Java is a great strength, but it's not well-suited for all domains, and is therefore one of Clojure's weaknesses too. But of course everyone knew that already.
From: Rob Jellinghaus (Dec 03 2009, at 13:08)
Well, Guy Steele makes a good stab at explaining the deeper reasons why true TCO is A Good Thing:
http://projectfortress.sun.com/Projects/Community/blog/ObjectOrientedTailRecursion
Worth reading, anyway... he's always an excellent explainer :-)
From: John Williams (Dec 04 2009, at 11:37)
Thanks for the article! I'll probably refer people to it in the future when I want to explain what Clojure is about. I think it would be more accessible, however, if your example in part 4 included some mention of the relationship between refs and dosync, with a link to <http://clojure.org/refs>.
Also, your link from "trampoline" seems to be broken. The correct URL is
<http://richhickey.github.com/clojure/clojure.core-api.html#clojure.core/trampoline>.
From: dasuxullebt (Dec 05 2009, at 04:04)
Clojure brings a few concepts of Lisp to the mainstream. But I myself don't really like it, and I know that a lot of other people don't. Thus, it is definitely arguable whether it's the best Lisp ever.
It is one approach. One approach some people like and some people don't like. And I am one of the people who don't really like it (no offense - I still like it better than most other programming languages, but not better than other Lisps).
There are JScheme and ABCL, which both run on the JVM; the only difference is that Clojure is explicitly designed for the JVM, which makes it better integrated, but you can use any Java library with ABCL, too.
And I don't see the point: most native Scheme compilers have an excellent FFI, and most Common Lisp compilers support CFFI, so you can write bindings for any C library. The only reason this isn't done more often is that people mostly want a library that is "Lispy", so they have to put in a lot of work to wrap around the pitfalls of memory management and such.
But to be honest, even in Clojure one wants to write wrappers around Java libraries - because if you just use a Java library in Clojure, without writing a wrapper around it, then you aren't using Lisp - you're just using Java with Lisp syntax.
From: JSR (Dec 05 2009, at 17:48)
@Vladimir
"This is also the reason why there are not very many high-quality (note I said high-quality) 3rd party (I really like the Java Class Library though) Free Software Java libraries out there."
Really? So these do not count?
* Spring
* Google collections
* Apache commons
* Junit
* jbehave
* Ant
* Joda-time
* And the list goes on...
I thought that high-quality 3rd-party libraries were one of the benefits of Java over other languages (cf. the Microsoft CLR).