Programming and Scaling (lambda-the-ultimate.org)
115 points by geekam on Aug 6, 2011 | hide | past | favorite | 32 comments


Alan Kay's praise of Bob Barton in this talk inspired me to do some googling, and I ran across an extraordinary thing: a transcript of an oral history conference from 1985 that reunited Barton and the team that built the Burroughs B5000. Here is what Kay says about Barton and the B5000 in his talk: "In history you can find really incredible geniuses like this man, who anticipated so strongly that we can only emulate his ideas in software today. Intel hasn't gotten around to understanding them yet." "The hardware of today is not as advanced as this machine was."

When not only Alan Kay but Chuck Moore as well describe your work as an epochal influence, it must really be something.

In 1985, some wise people got the B5000 team together and recorded them talking about it at length. The transcript exists online, and it is a remarkable document. Among many gems, Barton says: "I just thought of this morning a way of characterizing what I've been doing for 30-some years. I'm an industrial saboteur." Also: "I have never been able to work with devil's advocates". And here is something that is reminiscent of Kay:

"I was interested in small machines. In my view, the machine got too big too soon. All the experience we had from working with very small machines, particularly the 650, only indirectly with 205s, was in the direction of simplicity and operating systems and engineering out as much as possible all the incidental red tape that was very common in those days."

Also, it's noteworthy how frequently [Laughter] annotations appear in the transcript. I've long felt that the best teams are also the most fun ones.

http://special.lib.umn.edu/cbi/oh/pdf.phtml?id=21

Edit: it deserves its own post, so I put it up at http://news.ycombinator.com/item?id=2855508.


Dr. Kay is a genius of the highest order. He not only understands that we're not at the Promise Land, he understands why we're not and gives valid advice on how to get there, including working on it himself.

This is not to say that he doesn't get many things wrong. VPRI's COLA, for example, and the IQ/Knowledge/Outlook portion of this presentation is horribly off-base. I won't go into it here, the margins are too small and it doesn't matter. The main point is that he's still the bright torch in a dark night.


Can you explain more about the knowledge / IQ thing - even briefly? In my experience, I've seen many misguided intelligent people not much helped by it, some even worse off as they use their smarts to muddle through where the terrain should have dissuaded them long ago; think eg over-complicated under-abstracted code. And reading Mensa newsgroups will disabuse you very quickly of the idea IQ alone is even a positive, rather than neutral at best, trait. You need intelligence to make a leap of insight, but knowing that a leap of insight is possible, that any particular problem deserves a novel solution rather than an existing one, requires perspective or foreknowledge, or knowledge of the history of solutions that didn't quite make it. So I'd agree with Kay here; smarts alone are almost useless.

Or to make an analogy: IQ is like strength, but knowing where to push needs perspective (literally a different point of view) or a map (knowledge). Applying that strength to gain perspective instead of just pushing requires some self-awareness, particularly if just pushing even harder always worked for you in the past.


Sure. It's a big subject but here's my take: IQ, first off, does not measure intelligence. Intelligence is qualitative, like beauty, not quantitative like height. While you can be one inch taller than someone else, you can't be one point prettier. You can measure cheekbones in women, and higher cheekbones will on average indicate "more beautiful". You can say Czech women have higher cheekbones on average than other nationalities, and that as a country it tends to produce more supermodels per capita, and draw some correlation. IQ can be used this way too, but it's measuring an aspect associated with intelligence, not intelligence itself. Worse, for years there was an understanding that IQ can't be changed per person, leading us to believe we are 'this' smart and can't get smarter. What a fiasco, because we now know that it's like fitness in this regard: the more you work at it, the smarter you can become.

Dr. Kay mentions outlook and that does apply. This beauty analogy is important, so let's keep using it. My wife and I were in Monaco yesterday, and two women passed us who had clearly undergone tremendous amounts of plastic surgery; and we agreed that it made them look hideous. If there were a Beauty Quotient (BQ), they would proudly declare themselves to have "a high BQ". It simply doesn't work that way. Take any public figure you find beautiful and look at their feature set. You'll find it's not a simple addition of beauty features that makes them beautiful. It's about the mix of features -- for that time and culture. In other words, it's partly about "character" (what Kay calls "outlook"). But while that's where it ends for beauty, that's only where it starts for intelligence. Because your character, how you define yourself, will block your ability to make the right connections. If you think of yourself as a patriot, you will block information that runs contrary to that belief. If you define yourself as being smart, you won't be able to accept conclusions that might be controversial and place that image in doubt, and so you will be less intelligent for it.

Dr. Kay calls knowledge important. It is, but not as much as he thinks. If we consider graph theory, we can say "knowledge" is like the vertices -- the dots. When learning, you group dots and break apart dots: ascern (my word) and discern. Ascern means finding common properties in a group of data points: wine and water are both part of the set of liquids. Discern means distinguishing differences: wine can be red or white, and red wine can be from different grapes such as merlot, pinot noir, etc.

"Understanding" is the equivalent of the edges -- the lines connecting the dots (of various abstraction levels). You can't understand what you don't know, so you need knowledge, but that alone won't get you there. In fact, too much knowledge in any particular space can inhibit your ability to draw inferences and truly 'understand' the subject (making relations among the data points) because you're dealing with information in terms of data points that are too small; you're not abstracting high enough. The edges are not as simple as I'm making them out to be. The edges actually say, this applies to that 'usually' and 'in certain cases'. Water belongs to the set of liquids, but only when the temperature is right. So these data points are abstracted via edges, and so, in conditional ways.
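The vertices-and-conditional-edges framing could be sketched concretely -- this is a toy Python illustration of my own, not anything from Kay's talk: facts are nodes, and each edge holds only under a condition, like "water is a liquid, but only at the right temperature".

```python
# Toy sketch: knowledge as vertices, understanding as conditional edges.
# Each edge (subject, relation, object) carries a predicate saying when it holds.
edges = {
    ("water", "is_a", "liquid"): lambda ctx: 0 < ctx["temp_c"] < 100,
    ("wine", "is_a", "liquid"): lambda ctx: True,  # holds unconditionally
}

def holds(subject, relation, obj, ctx):
    """An edge 'holds' only if it exists and its condition is met in context."""
    cond = edges.get((subject, relation, obj))
    return cond is not None and cond(ctx)

print(holds("water", "is_a", "liquid", {"temp_c": 20}))  # True
print(holds("water", "is_a", "liquid", {"temp_c": -5}))  # False: it's ice
```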

Understanding also means knowing at a comfort level that allows you to play with the information at the right abstraction level. Let's say you are air-dropped into New York City. All you know is what you've heard about Central Park and the Empire State Building. So what you do, initially, is to abstract around those landmarks. Where is the really good Thai restaurant? A bit to the South-West of Central Park. But as you learn more about NYC, you get more detailed: the place is next to the Roxy on Lex and 23rd. You're able to talk in terms of greater granularity: What are the bad areas? At first they will be big swaths of NYC, but later more and more like little bubbles throughout the city. Sorry, that's not a good explanation at all -- I'll try to come up with something better. The main thing is that "understanding" lets you create vertices when you don't have them (via approximation to other edges and vertices), and use unknowns with more ease. If you don't "understand", you will cling more tightly to your landmarks, but when you understand, you don't have to anymore.

We often call understanding about life "wisdom". We don't have a word for understanding about cars or microbiology. And it's this understanding, combined with character, where true intelligence or genius arises.


I think you've redefined the words intelligence and knowledge to mean things other than what Kay and others use them to mean. If you use a private language, it's easy to win any argument. I don't disagree that there are useful distinctions to be made between facts, concepts and wisdom; but I used the word knowledge to encompass all three.

I strongly disagree with you that "true intelligence" (a true Scotsman?) is a synonym for genius; both words would be hopelessly ambiguous in meaning if that were the case; and I also disagree that understanding is required for intelligence. Rather, some intelligence is a prerequisite for understanding, and to some degree intelligence is a measure of the rate of change of understanding when presented with novel situations - the first derivative, if you will.

If you meet someone who is quick on the uptake, who learns quickly; and contrast them with someone a bit slow; what are you to call that quality of them if not intelligence? This quality is not reliant on character or understanding, but rather the reverse.

And I still think that if you use the words as Kay meant them, you would not disagree with him. All the more so, since you emphasize the synergy between understanding and smarts.


Wait -- no one is trying to win an argument. I must not be communicating my point well, so I apologize.

If you meet someone who is quick on the uptake, who learns quickly; and contrast them with someone a bit slow; what are you to call that quality of them if not intelligence?

I would call that "quick uptake" and "a bit slow." That's it. Einstein learned to talk at 5. So he was stupid? This is a poor way to define intelligence.

When Dr. Kay says IQ, he also means this "innate cleverness." And my response is: if two people lift weights and one has to work at it more than the other, but both can lift the same weight, which one is stronger? Neither. They're equally strong. Mental acuity is very much like physical strength: it is what you make it. So, I say, it makes no sense to define a predisposition to mental acuity, since that is impossible to define, is easily misunderstood and doesn't reflect the end reality. I'm not switching terms on this: it's a useless, and potentially harmful, distinction.

And I still think that if you use the words as Kay meant them, you would not disagree with him.

I think that knowledge is a basis for intelligence but not as important as Dr. Kay asserts. For me, knowledge, in and of itself, is only the vertices upon which to build the graph of true understanding. How that graph is built is fascinating and I think deserves more examination, which is why we can't leave it as a single term called "outlook," in my opinion. But that's just my opinion; I'm not trying to win points or arguments.

Further, Dr. Kay's goal was not to give a lecture on intelligence, but to get the audience to think harder about what they value. Which is incredibly important and is just one of the reasons he's a hero of mine.


Yep. This talk has made me want to watch and read everything Kay has ever done. He and Steele have laid out the clearest case for inventing new languages that I've heard.

Have you, perchance, heard Steele's "How to Grow a Language"? (Not too relevant, but also a great talk.)



Link to the paper he mentions as one of the best papers ever at the start of his talk for your convenience: http://www.bottomup.co.nz/mirror/Barton-B5000.pdf

The paper describes a CPU architecture. It has a lot of similarity with Chuck Moore's Forth hardware in that it is stack based instead of register based.
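To make the stack-based vs. register-based distinction concrete, here is a minimal sketch (in Python, purely for illustration -- it does not model actual B5000 or Forth semantics): operands live on an implicit stack, so the compiled form of (2 + 3) * 4 names no registers at all.

```python
# Illustrative stack-machine evaluator: literals are pushed, operators
# pop their operands from the stack and push the result back.
def run(program):
    stack = []
    for op in program:
        if isinstance(op, int):
            stack.append(op)            # literal: push
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)         # pop two operands, push sum
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)         # pop two operands, push product
        else:
            raise ValueError(f"unknown op: {op}")
    return stack.pop()

# (2 + 3) * 4 in postfix form -- no register allocation needed.
print(run([2, 3, "+", 4, "*"]))  # → 20
```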


Chuck Moore has cited his encounter with that work as a fateful inspiration for Forth.


Also: linking to OMeta (mentioned on the last minutes): http://tinlizzie.org/ometa/

OMeta discussion on HN: http://news.ycombinator.com/item?id=2722730


This was a breath of fresh air. I think someone could do great good by taking, say, 15 or 20 seminal papers in computer science from the last 60 years and turning them into a book that teaches the history and its discarded ideas.

I still think one of the biggest pieces missing from the puzzle here is the problem of notation. Kay's examples seemed to largely still be stuck in the ASCII paradigm, and I think we need to raise the abstraction so appropriate notation can be used without any restrictions.

To quote Whitehead: "By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race."




What I've seen is fascinating— I wish the video were faster, or available in a format that didn't completely shit the bed every time the stream hiccups. (The "other formats" available are Real and... Real HQ.)

Edit: As I discovered elsewhere, there is MP4 available for individual segments of the talk, just not the whole thing. I'm set.


The LtU post makes it sound quite interesting, but what browser do I need to view the video itself?


It worked with my Chrome on Mac.


I've tried Chrome, Safari, and Firefox on mine. The page loads, and the player loads, but the video does not.


The talk is also provided in other formats:

http://tele-task.de/archive/lecture/overview/5819/


Did you check that link yourself? The only other format for this talk is Real— not all of us consider that an alternative.


There's also MP4 for the four individual sections of the talk (see phone icon).


Ah, I didn't see that. Well, that's much less of a hassle than Real— thanks!


I think his point about the language-oriented approach is interesting. My fear is that the create-an-appropriate-DSL-and-your-problem-takes-little-code approach would actually be disastrous if used by mediocre programmers. Instead of having to read their bad code, we'd be stuck trying to read their bad code in their shitty DSLs.

Object-oriented programming probably wouldn't suck so bad if it had stuck to his vision. Objects are great in some circumstances (the biological metaphor of complexity encapsulated behind a simple interface applies) but they've utterly failed as the default means of abstraction. The default abstractions should be (1) pure functions, and (2) simple and sound mutable state primitives (STM, Agents, etc.)
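As a hedged sketch of the second default abstraction named above -- a simple, sound mutable-state primitive in the style of a Clojure atom (the class name and API here are my own illustration, not a real library):

```python
import threading

class Atom:
    """A single mutable cell, updated only by applying pure functions."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def swap(self, fn, *args):
        # The pure function computes the new state from the old;
        # the lock makes the read-compute-write step atomic.
        with self._lock:
            self._value = fn(self._value, *args)
            return self._value

    def deref(self):
        with self._lock:
            return self._value

counter = Atom(0)
counter.swap(lambda n, inc: n + inc, 5)  # pure function applied atomically
print(counter.deref())  # → 5
```

The point of the design is the separation: all the logic lives in pure functions, and mutation is confined to one small, well-behaved primitive.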


This objection -- "but bad programmers will make a mess of it" -- is the stock objection everybody makes to every unorthodox programming construct. Since it is an objection to everything, it is an objection to nothing.


Just because people overuse it doesn't mean it isn't true sometimes. I don't know if there's a name for that fallacy, but it definitely is one.


That what isn't true? That bad programmers will make a mess? Of course that's true. But if you can't show how and why it's more true of construct X than in general, you've said nothing. Meanwhile, it sounds like you've made a serious objection to X when you haven't.


This is why I like strong static typing. Instead of bad programmers making a mess of the language, the compiler makes a mess of them.

(I'm half-joking, since there are a variety of bad programmers and not all are of the "can't get code to compile" variety. Actually, the most dangerous bad programmers are the high-IQ bad programmers, but that's another rant. Still, I think it's true that the discipline enforced by languages like ML and Haskell turns a lot of bad programmers away.)


Regarding your first point about mediocre programmers: I find this patronizing. True, DSLs have the potential to obscure your code, but so does every construct that allows abstraction - even the humble subroutine ("If mediocre programmers give names like bongoBong() to a subroutine, then how can we tell what the program does?").


Could not agree more. Or, as said in Five Questions about Language Design[1]: Give the Programmer as Much Control as Possible

[1] http://www.paulgraham.com/langdes.html


Yes yes yes. Further, I think 'safe' processes, languages, and approaches play a strong part in making mediocre programmers.


The big problem with DSLs is that they are not composable. Writing expressions in one DSL within expressions of another is often very tricky, if not impossible.

DSLs also tend to be leaky abstractions. If I need to drop down a layer of abstraction in the middle of an expression of your DSL suddenly I have to understand all the mechanics of your DSL.


Maybe DSLs make more sense in the context of a master/junior programmer setup, where the master programmer creates the DSL and the junior programmers write in it. I don't have enough experience to say with any confidence. Do you think that would work?



