Emailer: "ITA [Software Inc.], the other alleged Lisp success story.. also says much the same thing.. they don't really use Lisp except for certain key things"
PG: "Using a language for these "certain key things" is exactly what it means to "really use" it."
I thought this was a terrific observation about language wars. You can build a website in Ruby or PHP or Python etc., without really using the language. If your code ports nearly line for line from say, A to B, you're probably not taking advantage of the benefits of the language(s). You are using the language, but not really using it.
A more concrete example:
You can build a website in PHP that also has a chat room written in node.js.
In this case you are really using both languages.
Alternatively, you can build a website in node.js that also has a chat room written in PHP.
In that case you are using both languages, but not really using either.
Do you think that this "really using" argument is more or less a tautology?
(1) If people who wrote their website in Lisp (Viaweb, ITA, Reddit) continue using Lisp, then they are smart and Lisp is powerful.
(2) If those people (or new people) rewrite the website in a different language (C++, Python), then either (a) the new people are dumb (Lisp is too powerful for them!) or (b) the website was not really using features not in the new language (the powerful Lisp features were not used!).
EDIT: As noted by pnathan below, maybe it is more of a No True Scotsman.
I don't think it's a tautology, just getting too hand-wavy on an important point.
What does it mean to really use a programming language? I take it to mean that your task aligns with the strengths of the language. I can use the butt of a screwdriver to drive in a nail, but hammers certainly are better at the task. Programming languages are similar, but defining the strengths of the languages and the needs of your task are monstrously more complex.
One strength, and weakness, of Lisp is its malleability. The language itself can be turned into almost anything you want or need. If your purpose is to demonstrate possibility, few tools are better. At this point you can really use Lisp, because at this point you are free to exercise the strengths that only Lisp has.
Once that possibility is known, though, this strength becomes a form of weakness. The project should now move with purpose, methodically exploring the problem space. You draw attention, hopefully money, and with it more team members. What was a single vision wrought in code now must be the coordinated effort of a large group. You can no longer really use Lisp because the group is too large and sharing the knowledge of a newly built language feature takes more time than the feature saves. So, you stop really using Lisp and just use it, carefully selecting the subset of tools that allow people to work together.
Or, you rewrite, in some language whose strengths are more in line with the needs of your now larger project. Then you are really using that language. I don't see tautology or contradiction here, only shifting requirements that we fail to acknowledge.
Let's say you are sitting at your desk and your boss asks you to code a solution to a problem, let's call it Problem Zebra.
If you flip open your laptop, roll a die labeled with 6 programming languages that you know, and then write code in whatever language you roll, that solves Problem Zebra, that is "using a language".
If you flip open your laptop, and before you start typing you think, "I need to solve Problem Zebra, which is a type of Mammal Problem, and Language Y is often used to solve Mammal Problems because the designers of Y added constructs A and B specifically to help solve Mammal Problems (because that's the genre of problems they focus on at work)," and then you write code in Language Y and use A and B, that is "really using a language".
This is basically a manifestation of the "LISP Curse" - high level functional languages are sufficiently powerful that small teams of really smart people (like us) can crank out amazing applications with relatively low effort.
However, to maintain and improve these applications, you need to maintain and grow a pool of really smart people, which is something of a challenge for large companies. We tend to be like cats - very fascinated for short periods of time but unwilling to listen to non-technical direction issued by a bunch of MBA's.
It's far easier - for a non-technical person - to assemble a horde of average developers using a pre-packaged toolkit (Java, Microsoft, SAP, or enterprise software packages) and boss them through some form of digitized process map. Let them hire a bunch of their B-school buddies at ACN or McKinsey to "facilitate" the deployment, for $300 - $500 per hour, and it's a lock.
Big companies don't care about technical excellence - they care about ensuring the company, not the developer, controls the code. The truth of the matter - developers are not created equal and you will get a lot farther in a small group if you let your best person carry the ball.
Unfortunately we're comparing archers and crossbowmen - the archer took YEARS to train but delivered (for his time) an awesome rate of fire with good accuracy. The crossbow allowed you to herd together large groups of average peasants and overcome the archer by sheer weight of numbers.
Pure Paul Graham:
"Do you have any idea how long the future is? Do you really think people in 1000 years want to be constrained by hacks that got put into the foundations of Common Lisp because a lot of code at Symbolics depended on it in 1988?"
In my CS degree we never learned about how languages will evolve. We learned how they work, how to write new ones, how to translate between them, but never how to future-proof them.
But right now it looks like C will remain the language of the future :)
In a "Long Now" talk [1], science fiction author Vernor Vinge mentions the "software midden heap", the layers of software standing on the shoulders of and papering over the bugs of earlier software layers. Will anyone be able to excise rotting middle layers without breaking software compatibility?
Another example is from Vinge's novel A DEEPNESS IN THE SKY. He describes a software system thousands of years in the future that still uses the time_t epoch. None of the system's space-faring users know the significance of the date 01970, though a few believe it is the date when computers were invented. :)
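For the curious, the epoch in question is trivial to verify in any language with a standard time library; a minimal Python check:

```python
# A quick sanity check of the time_t epoch: timestamp 0 is 1970-01-01 UTC.
from datetime import datetime, timezone

epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00
```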
20 years after the "discovery" ;) of UNIX, Linus began developing Linux from scratch. Linux is now 20 years old. Is some college student writing a new operating system that will cut the cruft from Linux? Even Linux is far from a clean-slate design because it embraces (and extends) POSIX APIs and UNIX conventions.
Will we still be using Linux (or a Linux derivative) in 2050? 2100?
[1] Vernor Vinge: “What If the Singularity Does NOT Happen?”
> None of the system's space-faring users know the significance of the date 01970, though a few believe it is the date when computers were invented. :)
And some thought we measured time from the date of the first moon landing.
More relevantly in the book is the class of people known as programmer-archaeologists who specialise in digging through libraries of software looking for code that already does what they need.
The future is also too long to assume our current-generation languages will define it. While Lisp is a major step forward, I think it is by no means the end of language progress for the coming couple of decades, let alone 1000 years.
It might win hands down now, but probably not for 1000 years.
Lisp certainly includes certain 'timeless' aspects, such as lambdas and homoiconicity, but other features (like conses everywhere, IMO) are simply implementation details of the time Lisp was first created.
I'm concerned that Lisp advocates, by repeatedly saying that Lisp is the final step in language design, the ultimate language, etc., might distract some from looking for additional, newer, insightful concepts in computation and language design.
Cons is an implementation detail that no longer exists in reality. Most Lisps implement conses as arrays, and some Lisps, like Dylan, don't even use them to represent structure.
No programming language will survive 1000 years. Not many works of science or art, or even human languages, have.
Programming language research has already grown past Lisp itself, or any other programming language, really. What we have now are logical models, their derivations and their proofs. If any of them can be materialized in an implementation, made practical, or even commercialized, great. Otherwise PL research is happiest as... logic.
Edit:
Brother Muhammad, I liked seeing your Arabic programming language. I have also created a few Arabic programming languages, most of them as quasi-Lisp languages :-)
Well, there's certainly a difference in scale here. If you have a shop where no one knows Erlang and everyone's experienced in Python, and your conversion project can be knocked out in 2 weeks by a fresh intern, well, I think the time savings are pretty straightforward.
X developers x Y time to "really learn Erlang" = Z > 2 weeks
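Plugged-in numbers make the trade-off concrete. The figures below are purely hypothetical (team size, learning time), chosen only to illustrate the inequality:

```python
# Hypothetical numbers, just to make the inequality concrete.
developers = 5           # X: team size (assumed)
weeks_to_learn = 4       # Y: weeks per developer to "really learn Erlang" (assumed)
rewrite_weeks = 2        # the intern's Python conversion estimate from above

retraining_cost = developers * weeks_to_learn  # Z, in developer-weeks
print(retraining_cost > rewrite_weeks)  # True: the rewrite is cheaper here
```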
I love programming languages and I never miss an opportunity to learn a new one, but I think that anyone who thinks a programming language can make or break a company doesn't have much clue about how the industry works.
I have a lot of respect for pg but the successful sell out of his company had a lot more to do with "the right product at the right time" than it had to do with Lisp.
I'm also very disappointed with this kind of condescending claim:
"The reason they rewrote it was entirely that the current engineers didn't understand Lisp and were too afraid to learn it."
Or maybe, just maybe, they know a little bit more about the pros and cons of using a mainstream language to ship a product in terms of staffing and tooling and they decided that Lisp just didn't clear that bar.
"We chose Lisp. For one thing, it was obvious that rapid development would be important in this market. We were all starting from scratch, so a company that could get new features done before its competitors would have a big advantage. We knew Lisp was a really good language for writing software quickly. [..] And because Lisp was so high-level, we wouldn't need a big development team, so our costs would be lower. [..] with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release." - http://paulgraham.com/avg.html
It was the advantages of using Lisp which gave them the ability to create the right product at the right time, instead of the right product too late, or dreams of the right product, too expensive to hire the team to build it.
The claim isn't really that Haskell might be a success and F# a failure, but only that if you can take advantage of a more powerful language's power, you gain some advantages over companies which use less powerful languages - not a guaranteed make-or-break difference, just another bit of help.
Given that he's gone from successful founder to successful investor mentoring several thriving startups, your claim that he can't "have much clue how the industry works" feels hollow.
> Or maybe, just maybe, they know a little bit more about the pros and cons of using a mainstream language to ship a product in terms of staffing and tooling and they decided that Lisp just didn't clear that bar.
Or maybe, just maybe, the guy who wrote it and was there when it was rewritten knows more accurately what happened than you do. It isn't condescending if it's true.
> It was the advantages of using Lisp which gave them the ability to create the right product at the right time
Obviously, because they used Lisp, they were able to produce the product at time t, and any other language would have produced it at a different time t'.
I claim that they had no idea that t was better than t'. Nobody can.
Sure, that's one possibility. Other possibilities:
Or maybe, just maybe, they didn't understand the pros and cons of rewriting a mature codebase from scratch.
Or perhaps they couldn't make the lisp code into an indecipherable mess that made them look intelligent for having written it like they could with C++.
Or perhaps they believed that too many parens would make the curly brace Gods smite them.
Ironically, Yahoo Mail is now rewriting chunks of their C++ backend in Java, because that's what new grads are taught these days instead of C++.
Back in 2003 pretty much all of ymail was in C++ with this crazy templating language that generated C++ code. Now it's all PHP and JavaScript on the frontend. I assume that's true for other properties as well but I don't know first-hand.
"10: An interesting side-note. While the LMAX team shares much of the current interest in functional programming, they believe that the OO approach provides a better approach for this kind of problem. They've noticed that as they work to write faster code, they move away from a functional style towards OO style. Partly this is because of the copying of data that functional styles require to maintain immutability. But it's also because objects provide a better model of a complex domain with a richer choice of data structures"
The world of software is so large and varied that I can no longer just take experts at their word when it's applied to domains where they probably have no expertise or prior experience. PG with his LISP/Scheme world. Joel Spolsky with his Microsoft background. Martin Fowler with his OOP, Modeling ecosystem. Joshua Bloch with software correctness. Linus Torvalds with his embedded/OS island.
Are the things they wrote and spoke, within their domain, correct/make-sense? Perhaps.
Can we apply their opinions to other domains? Probably not.
I also feel that it's weird when a person always says "use the right tool for the job" but always pushes for his/her favourite programming language, as if that language is the best tool for anything that pops up on Hacker News. It feels like someone trying to put a square peg in a round hole.
Then there's the "let's burn the Agile Consultants" movement around here while promoting alpha-geeks that always change their programming language or web framework or their app-server of choice once every 2 years.
Some people don't care about the next programming language, probably because they're busy learning Data Mining (Math & Stats), SOA, REST, HTTP protocols, SEO, CSS/HTML, administering Linux servers, designing a database model, automating their testing effort, writing i18n-compatible apps, building robust code (in the programming language of their choice). Pretty much getting better in situations that don't necessarily involve learning a new programming language.
Maybe programmers prefer to learn new programming languages as opposed to other things because learning a programming language is easier than learning Data Mining. Most programming languages are similar in that they share a subset of common features: functions, type systems, objects/classes in some, pointers/references, iteration/loops, arrays. It's probably because we're human: we prefer things to be instant.
I don't think Ruby is more productive than Java in almost all situations. I'll give Rails the credit for being more productive for simple CRUD apps if you compare it to most Java frameworks (the newer ones like Play, Spring MVC, Spring Roo, and a slew of others are on par).
Maybe we shouldn't judge people based on their knowledge of more programming languages.
Yes, if it's true, it is (it's PG's assessment). But since they wrote an interpreter for Lisp in C++, they couldn't have been that afraid, at least not all of them. I guess that's what they "gained": only a few adventurous engineers at Yahoo "had" to learn Lisp and the rest could continue programming in C++.
> But since they wrote an interpreter for Lisp in C++, they couldn't have been that afraid, at least not all of them
Another interpretation is that this is like the “comb-over” syndrome. How does a middle-aged man end up with a comb-over, given that everybody knows it’s a lame look? The answer is that it creeps up on him day by day, there is no one moment in time when he asks himself, “Should I comb my hair over?”
Likewise, whenever you see a Greenspunned[1] system, there is rarely a moment in time when some bright and ambitious person decided that the right thing to do was to sit down and implement half of Common Lisp. Instead, they make a series of small daily decisions (add this, change that, add such-and-such a feature, whoops, to fix that bug we need to do this...) that end up accreting into an interpreter for Lisp.
I don’t know the whole story, perhaps they did decide to write such an interpreter right from the start. But it could be that they initially decided to write something very small and specific, but wound up with a comb-over.
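Greenspunning of this sort is easy to reproduce in miniature. Below is a hypothetical sketch (not Yahoo's actual code, and in Python rather than C++ for brevity) of the kind of "small expression reader" that, a few features later, is drifting toward being a Lisp interpreter:

```python
# A deliberately tiny "accreted interpreter": what starts as a reader for
# config expressions like (add 1 (mul 2 3)) ends up halfway to Lisp.
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

# Each "small daily decision" adds another entry here...
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def evaluate(expr):
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*[evaluate(a) for a in args])
    return expr

print(evaluate(parse(tokenize("(add 1 (mul 2 3))"))))  # 7
```

Add variables, then conditionals, then user-defined functions to `OPS`, and the comb-over is complete.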
> I have to ask though: Is Greenspunning such a bad thing?
Yes, because if you knew you needed Common Lisp, you'd just use CL, rather than an ad-hoc informally specified implementation of CL. In general, the half implementation has half the value of the 'real' implementation.
Only the RTML page templates were left as Lisp. That was very simple code. Plus they pretty much had no choice about leaving at least that much of the program in Lisp: individual users had created their own custom templates (through an easy template editor that generated Lisp on the back end), which means porting the templates would have meant testing not just the default templates but each user's custom templates as well.
Hi Paul. Since you are, maybe you can shed some light on how the rewrite actually happened? Was it the comb-over syndrome that raganwald described in a sibling comment to yours, or was it a more conscious decision?
The two are not mutually exclusive. Most people don't realize they are implementing a lisp interpreter in their massive rewrite, yet after the fact, to anybody who is familiar, it shows up right away.
I don't think it is sad, or even true for that matter. It is just the talk of someone who treats a programming language as a religion.
Why would Yahoo want to keep this one property written in Lisp around while the proficiencies within the company (and the general talent pool outside the company) did not include Lisp?
It has nothing to do with being "afraid" of Lisp, it simply didn't make sense to have one random Lisp property. I am pretty certain if Google made this acquisition, the site would be rewritten too.
To make it clear - you could replace Lisp with Erlang/Haskell/Forth and it would still be sad. I am still a student, and I see that many people who are still in college (so their programming habits are relatively new and shallow) are reluctant to try new technologies. I am aware that good projects are not about choosing a cool language full of buzzwords like 'monad', but still, most people want to stick with the tool they learned first for everything, which is sad.
At some point a balance has to be struck. If you spend all your time trying out new technologies, you never really get to know any of them very well, and your productivity is constantly hampered as you relearn the ropes on every project. I am all for learning new things, but I think there are definitely cases where it's reasonable to pass on rewriting everything just because someone thinks that "smart people use language Y, not X", and "X developers are too dumb to use Y".
This is how pg interpreted it. I wouldn't be surprised if the real reaction was more about _disgust_ rather than fear: I personally do like the functional programming and everything-is-a-symbol paradigms of Lisp, I just hate THOSE DAMN PARENTHESES and would probably try hard to avoid using Lisp at my workplace.
Some people have the same reaction when confronted with whitespace/indentation in Python, doesn't mean they're afraid to learn it.
I'd say that "functional programming" and "everything-is-a-symbol" are not really essential in (and actually quite far from) the actual paradigm of Common Lisp. Modern Common Lisp code is not especially functional, nor do the symbols play a significant role. It's more about the macros and the objects nowadays.
In case you did not mean "Common Lisp" by "Lisp", try to be more specific next time. The "Lisp" term does not include Scheme, and is not really that much used when talking about Clojure.
What do you mean by "everything-is-a-symbol", anyway?
I think the language debate comes down to two classes of people: (1) people who care passionately about languages and code aesthetics, and (2) people who don't really care. Unfortunately, the second class of people are more unified and comprise at least a plurality, so they win.
The problem with the first camp is that we all want to use different languages-- Haskell, ML, Clojure, Scala. And these are all good languages! But companies can't afford to have more languages than there are people, not as the latter number grows and as being able to read code becomes more important than being able to write it. So every company needs to limit its white list. Those of us who are passionate about good languages nonetheless disagree on which of those good language to use. And we do ourselves a disservice when we bash Python because it sucks compared to Haskell; the result of this disagreement is we end up having to use C++ or Java.
The perversion here is that the reason so many companies coerce their codebases to C++ (i.e. limiting the whitelist) is that they recognize the importance of reading code. Yet they choose languages (C++, Java) where reading average (not bad code, mind you, but 50th-percentile code in a production system) code is extremely difficult. A good C++ programmer can write eventually comprehensible, but not beautiful code; a bad or average one can easily make an illegible and unmaintainable mess. For contrast, a good ML programmer can write beautiful, easy-to-read code while bad ML programmers are practically non-existent because they can't get their code to compile. For bad programmers, ML is that evil little language that caused them to nearly flunk that PL course they had to take in college.
What I wonder is how Yahoo managed to get people of sufficient talent to do the project. It seems like one of those hard-but-shitty projects that never succeeds because the people who are competent enough to do it have better options. Do-the-same-thing rewrites are never popular projects, and most people who understand Lisp are not going to be happy about taking on such a project for the sake of C++.
(No, I'm not saying that there aren't good programmers who use C++. There are great programmers who use C++. That said, someone who understands functional programming well enough to read Lisp is not likely to be among them; people who understand FP want to use it.)
You've done an excellent job of summarizing why Google does what it does: extremely few languages, and all code must pass readability review (or be written by an author with readability for the language(s) used) before it can be committed.
Though reading code isn't just it, language interop everyone knows how to use is important too. While all the fancy JVM languages have Java interop, it's still a huge pain for every old Java developer who wants to call your Clojure library. They'll end up learning how to do it separately for every language, every time, multiplied by the number of developers who aren't fluent in these languages (most).
Note that with JRuby, it is trivial to implement a "normal" java interface so your java consumers do not need to bother with the implementation detail (language) of your jar.