What does Google's involvement in advertising have to do with the design of the SPDY protocol? Can you make a substantive criticism of SPDY based on Google's advertising incentives, or is this just innuendo?
Solely on their advertising incentives: no. It's a rhetorical flourish.
That said, I _do_ think it's extra important to pay attention to what Google does, for two reasons:
1. They're the largest entity on the Internet. This means their incentives are different from those of smaller players.
2. They _are_ an advertising company. Advertising companies make money by showing ads. They make more money by showing targeted ads. You target ads by collecting data on people.
I think people often forget Google's purpose in the world and are simply dazzled by 'whoa, cool stuff.' I appreciate some of Google's more interesting and ambitious initiatives, but I get very scared when people start accepting any entity's actions without question, especially when that entity has a large financial incentive to collect data about people.
There are, of course, many technical criticisms of SPDY, but none that rely specifically on the advertising angle.
Personally, I find this line of argumentation nothing more than pure ad hominem. This is an open specification, open to review by everyone. If there were technological changes made to somehow support better data collection, people would be able to see that.
The IETF and the W3C have always had large companies involved in specs, often with their own agendas; I see no reason to attack Google in this way.
An ad hominem would mean that I am saying they're wrong. My argument is not
Google is an advertising company, therefore SPDY is bad.
My argument is
Google is an advertising company, and the largest single entity on the Internet, and therefore their actions deserve a healthy dose of skepticism. I'm not sure that we've been giving them enough skepticism.
SPDY has good points and bad points. I just saw a lot of chatter from people who want to ignore the bad points simply because of where the spec came from.
Google also handles more traffic than almost any company on the planet, so their involvement in a discussion about transporting bits is, well, not really that shocking.
The issue is that you're throwing around them being an advertising company as a negative without any discernible proof that it has negatively affected the outcome.
What, specifically (and please spare us the 'rhetorical flourishes') has been proposed that is unfairly biased towards advertising? Which parts should we be skeptical about?
It's not advertising that I'm skeptical of w/r/t Google. It's their amount of capital.
E.g., relating to the ASCII/binary discussion above:
A binary web would require extensive retooling, and therefore investment. Smaller business entities are not in as strong a position to absorb such a large shift in their workflow. Therefore, switching to a binary protocol would disadvantage entities smaller than Google.
Sure, but bigger and smaller scale operations have different needs. The internet isn't supposed to be about what's best for the big guys, it's supposed to be about what's best for humanity.
The "specifics" in the other threads are more ad hominem. You're saying "we should be wary of what Google does" without actually mentioning what's there in the spec to be wary about. You're saying "we shouldn't trust Google to pass specs unchecked", people are saying "but we aren't: we've read the spec, and it's good", and you're saying "yeah, but we shouldn't trust Google".
Sure, the general trend is a good point, just not entirely relevant to this specific thread. Your comment above does make good points about the actual spec, I agree.
As I mentioned elsewhere in the thread, an advertising company has incentives to collect and process as much data about individuals as possible. This is Google's core competency.
Oh, and in Google's case specifically, they now control the largest web site, one of the largest web browsers, (with this) the protocol it talks to, and they're attempting to supersede JavaScript too... and one of the largest client-side frameworks. The list goes on and on. The largest mobile phone OS. Email provider. (?) Working on social network...
> There is more to the world than a technical draft, you can't just abstract away the rest of everything else.
Then maybe you should actually bring these things up. You seem to be playing coy and throwing around innuendo. If you have real, substantive concerns about something, I think the conversation would be greatly improved by actually bringing those up rather than just casting aspersions about Google.
There are a large number of Google employees and stock holders on HN, and it would not be at all surprising if they leapt to Google's defense for non-technical reasons.
It would be enlightening to see the defenders of Google disclosing whether they have any financial or other interest in Google.
To complement that disclosure, any attackers of Google should likewise disclose whether they have any financial or other interests in Google's competitors.
Sure, but I don't find the social critique very useful either.
There are things that Google does, as a corporation, that you can find fault with, technological decisions that may or may not have been influenced by business model. For those, fire away. But the attack on SPDY/HTTP/2.0 because "Google is an advertising company" (which if you actually worked at Google, and knew how people made decisions here, you'd know is ridiculous from an intent or motivation point of view) is just pure mudslinging.
Examples of stuff that I, as a Google employee, would criticize Google for: Real Names, building "siloed" services and moving away federated/decentralized approaches (see my essay here: http://timepedia.blogspot.com/2008/05/decentralizing-web.htm...), most of what Yegge said about APIs, Google Hangouts going "silo" and away from XMPP model, etc.
People who work on ads and take their marching orders from ads are a small portion of employees at Google. The guys working on Chromium/Blink/SPDY do not report to ads, do not take orders from ads, and in general work on technology without reference to monetization strategy. Their day-to-day job is to improve technology, with the hope that if you raise the tide, all boats will be lifted, and there'll be some ROI from that.
But the idea that engineers are taking marching orders from shareholders to maximize ad profits by tweaking web standards is hilariously wrong for people working on Chrome.
I'm not talking about "Larry And Sergey have decreed that Evil Shall Happen!" I'm talking about broad economic incentives. Since I don't work for Google, I have to treat them as a black box; I see what goes in, I see what comes out. I know nothing of the internals, I only have one friend who actually works there. If I implied there was some kind of conspiracy, that is my fault. You're right that that would be ridiculous.
I would also criticize Google for your reasons, and they may be even more important. But this isn't a thread about those things.
A reasonable argument would say that we don't need the social and political stuff in standards discussions, which should be based instead on engineering.
Absolutely. This is why I wouldn't make these comments on the IETF mailing list. I do think that HN is an appropriate venue, this is very much a social place.
"Should we be doing this?" and "How should we do this?" are two very different questions.
I think it's fair to argue that a company's purpose is best illustrated by its revenue streams. Reasonable people can disagree depending on the circumstances.
Github's revenue stream is through private repositories (both hosted on github.com and self-hosted enterprise), but I don't think you could reasonably assert that Github's purpose is to make a profit off of keeping code private. Their actions, in fact, suggest precisely the opposite.
In some cases, a company can transcend its initial purpose but keep it around as a/the revenue stream, as a means to the new end. Few, if any, of Google's newer and more far-out initiatives have reached wide-scale public adoption, so it's not yet clear whether Google will be such a company, but it could very well turn out to be one.
If it's a publicly traded company, it has a fiduciary responsibility to make money for its investors, so I'd have to agree with you. Its purpose is to make money. It might spend money to buy goodwill and earn loyalty, but at the end of the business day, it's a business.
Google's corporate charter was specifically written to avoid that. And shareholders have no meaningful voting rights, so they can't override it there either.
It doesn't have to be "codified" to be fiduciary. The trust relationship between any investor and the investment enterprise is that the enterprise will be able to generate a return on the investment. An enterprise that doesn't assume this will generally be deemed a non-profit.
If it's not codified then it's more likely an expectation than a responsibility. Of course investors expect a return, that's what the term "investor" entails.
Non-pecuniary returns can satisfy the responsibilities of an enterprise.
It appeared that a legal obligation was being suggested. What sort of obligation was being suggested and how is that obligation derived and enforced?
In brief, he makes three points. The first is that SPDY/HTTP 2.0 doesn't do anything about the widely lamented lack of session handling. The second is that it doesn't contain any simplifications of HTTP, despite there being several examples of things that could be simplified (header parsing, for instance, is hairier than it could be). The third is that it is going to pose problems for proxies.
I don't know how many of these points continue to apply with this HTTP 2.0 draft, nor do I have any skin in this game, but I respect PHK quite a bit so his outrage creates in me a sense of mild reservation. :)
I too have unreserved respect for PHK as an implementor. I'm not sure I find his critique compelling. It seems to me that it distills to a couple simple points:
* SPDY depends on Deflate compression, and will require middleboxes to implement deflate to route requests. I think the "IETF school of design" has an irrational fear of good compression and I think it's harmed other protocols, most notably DNS. I may be poisoned into this viewpoint by Bernstein.
* There are protocol constants that PHK doesn't know the background of, which strikes me as the kind of documentation bug that something like an HTTP 2.0 would address.
* SPDY might have required another well-known port (WKP), which isn't really a SPDY problem.
* There's DoS potential in SPDY --- but of course, there's DoS potential in HTTP too; look at chunked encoding, for instance. For that matter, modern HTTP 1.1 also accommodates compression; when it comes to attack surface, in for a penny, in for a pound.
* A similar argument addresses PHK's concerns about the (theoretic) security of the push model, which is also something that modern HTTP accommodates.
Later:
Oh oh also: PHK sees HTTP 2.0 as an opportunity to correct the session management problem, which has led to the "bass ackwards" design of heavyweight signed cookies in web applications. I sympathize with him on this point, but it's not HTTP's fault that this happens. HTTP 1.1 cookies also used to be simple opaque session IDs; heavyweight signed cookies are a consequence of server app architecture, not the underlying protocol.
Even if HTTP 2.0 had built-in robust session management, Rails apps would still be shoving several kbytes of encrypted state out to web browsers.
The first two criticisms of SPDY sound like "doesn't solve every known problem with HTTP at once", which was never a design goal; that doesn't make SPDY bad, it just means that further room for improvement still exists.
The third criticism, that SPDY makes life more difficult for routers, makes me wonder: would this get easier if SPDY just said "forget the Host header, SPDY requires SNI"? Seems like that would help.
My main objection is that the name you call something does matter. SPDY is a very different protocol from HTTP, which addresses a very particular set of concerns. It diverges quite a bit from the "intent" of HTTP. This is all fine and good until you change the name from SPDY to HTTP 2.0. One expects 2.0 of something to continue the same philosophy and motivation that produced 1.0. When that doesn't happen (R6RS is another good example) you can expect some pushback. In this particular case, the "label swap" nature of the process is generating animosity from those who feel that the process has been co-opted by people trying to pull a fast one. I don't think SPDY is intrinsically wrong, I just don't think it looks like a natural successor to HTTP. I wouldn't expect HTTP 2.0 to address every known problem with HTTP at once, but I don't think it's unreasonable to expect at least a few aesthetic improvements.
I don't see how this follows from your earlier objections. "It doesn't add session handling and it doesn't simplify header parsing, therefore it diverges from the intent of HTTP" seems like a non sequitur.
Don't confuse my objections with PHK's objections. There may be good technical answers to his objections; Thomas replied to them above quite cogently, but in any event, PHK's opinion carries a lot more weight than mine. I'm just a spectator.
My objection (observation, really) is that one expects protocol 2.0 to do more than address performance optimization. Simplifying the protocol is a good thing to do with a major revision; they didn't do that. Making the protocol more friendly for upper layer users is another good thing to do with a major revision; they didn't do that either. Instead they took an obviously different protocol designed to address a handful of extremely technical performance matters and rubber-stamped it as HTTP 2.0. Whether you like SPDY or not, it should be clear that this kind of "process" is going to leave people feeling disenfranchised. The spirit of HTTP, inasmuch as such a thing exists, is one of simplicity. SPDY just doesn't "smell" like the successor.
I think the comparison to R6RS is very appropriate to my point. R6RS was designed to address well-known shortcomings of Scheme. The process it took to get approved circumvented a lot of the community. A large segment of the community responded to this by essentially whining about it and ignoring it. We already see the whining about HTTP 2.0. I predict it will be followed by ignoring it, and some years in the future, an HTTP 2.1 or 3.0 that more closely resembles HTTP 1.1.
My sibling has already pointed out one of the better critiques I've seen. There is also http://www.guypo.com/technical/not-as-spdy-as-you-thought/ , which I believe has been discussed on HN before, but I'm on a pomodoro break, so I'm trying to keep this short.
One critique (I don't remember whether it appears in either of those two) concerns header compression. Header compression seems to make sense, as compression is good. The problem is that intermediaries make routing decisions based on the headers, so it's quite possible that the CPU time needed to decompress, possibly modify, and recompress the headers outweighs any gains the compression brought in the first place.
I've also seen some vague commentary about 'mixing application concerns into the transport layer' which I find compelling, but I don't have enough experience with the low-level networking to properly judge on my own.
Worst of all, the compression is stateful: a debug tool has to capture the whole HTTP/2.0 session from the start to be able to reconstruct any of the information, which makes mandatory HTTP/2.0 debug tooling much harder.
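The statefulness is easy to demonstrate with plain zlib, which is what SPDY's header compression is built on. Here's a minimal sketch (generic zlib streaming only; SPDY's preset dictionary and framing are omitted): one compressor lives for the whole connection, so a later header block can only be decoded by a decompressor that has seen everything before it.

```python
import zlib

# One compressor per connection, flushed after each header block,
# mimicking SPDY-style stateful header compression.
headers = b"host: example.com\r\ncookie: session=abc123\r\n"
comp = zlib.compressobj()
block1 = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
block2 = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

# block2 is tiny: it's mostly back-references into block1's history.
print(len(block1), len(block2))

# A capture from the start of the session decodes cleanly.
full = zlib.decompressobj()
print(full.decompress(block1) == headers)  # True
print(full.decompress(block2) == headers)  # True

# A capture that starts at block2 is useless: without the state
# built up while processing block1, the headers can't be recovered.
partial = zlib.decompressobj()
try:
    recovered = partial.decompress(block2)
except zlib.error:
    recovered = None
print(recovered == headers)  # False
```

This is exactly why a middlebox or debug tool that joins mid-stream can't make sense of the headers: the compression context belongs to the connection, not to any individual message.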
> their incentives are different than smaller players
Yes. They are not representing smaller players, i.e., the majority. And I think for smaller players speed is not as important as convenience. So this could even hurt smaller players in the long run.
You don't think Google receives enough skepticism? Every time they brew a pot of coffee, somebody out there declares that Google has violated their "Don't be evil" motto and is out to destroy us all with their dark caffeinated schemes. I can think of very few companies that are treated with more skepticism than Google.
Google is the industry's most active and effective corporate advocate for TLS. They're one of the key drivers for certificate pinning and one of the earliest mainstream deployers of forward secrecy. So I think that argument is a little bogus.
I don't understand the first point, though. Could you clarify?
QUIC is a very new, experimental protocol that runs on UDP.
Their (relevant) basis is that TCP's algorithms are completely controlled by the OSes and the routers and all. Using UDP, QUIC can quickly deploy new algorithms without requiring a major part of the world's infrastructure changed.
Google is the industry's most active and effective corporate advocate for TLS simply because it makes tracking users and selling targeted advertising a whole lot easier. Their involvement in the whole PRISM affair has undoubtedly demonstrated that privacy is none of their concern.
Years ago, in days of old, when magic filled the air, I wrote a Slashdot troll post generator. It eventually produced some pretty hilarious posts, but I never closed the loop by allowing it to post. It would make a fun project for learning a new language; perhaps I'll install Dart and give it a shot.
With SPDY as implemented, all requests to Google Analytics reuse the same TCP connection. This connection acts as an implicit tracking cookie, uniquely identifying your browsing session.