Hacker News

I'm worried, not because of the standard itself, which seems well thought out, even if rushed.

I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---either because they have thousands of open source developers, or are backed by a corporation---have it easy. Smaller players? Not so much.

Also, note that the exact problem that HTTP/3 tries to solve was known in the design process of HTTP/2 and some people even noted having multiple flow control schemes at multiple layers would become a problem. We are letting the same people design the next layer, and probably too fast in the name of time to market.

This should definitely be delivered in a way people can make use of easily, with an API highly amenable to bindings. If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth. This kills operating system diversity as well, or runs the risk of doing so.

OTOH, I see the lure: SCTP never caught on for a reason, and much of this is the opposite of my above worries.



> some people even noted having multiple flow control schemes at multiple layers would become a problem

It could, but it didn't in reality. HTTP/2 has two levels of flow control, stream-level and connection-level. You use 1 connection per site and as many streams as you want multiplexed inside that connection, thus stream-level flow control is necessary to avoid stream head-of-line blocking.

The actual layering violation is connection-level flow control, which seems to duplicate TCP flow control. But it's not mandatory: as you can see, most if not all open source implementations simply set a very large connection-level window size to hand flow control at this level off to TCP.

There is a good reason for this to exist, which is to compete for bandwidth with the HTTP/1.1 domain-sharding technique, which uses N connections per "site", effectively getting N times the Initial Congestion Window (IW) that HTTP/2 can have in one connection. IW was a huge issue in improving connection startup latency, and after managing to convince Linux netdev to raise it to 10, Google couldn't get them to allow applications to customize its value any further. The only solution for Google was to add some flow control information in HTTP/2 and couple it to TCP flow control to improve the effective IW. So in reality only one flow control scheme is working at any time, instead of the commonly feared TCP-over-TCP meltdown. Anyone else can simply not do connection-level flow control in HTTP/2, and nothing of value is lost.


The TCP state machine sucks and all of its timing parameters are outdated and unsuitable for modern networks. QUIC frees us from the tyranny of the kernel. Being in userspace is a feature.


The rallying cry of everybody who later comes to the realization that they have re-implemented TCP.


So what if we use our experience and in-depth knowledge of a past protocol, take into account the flaws, and build something better? You say "re-implemented TCP" as if it were the only possible way to build a reliable packet protocol, as if it had no flaws and we couldn't make any improvements to it.

TCP isn't alien technology we don't understand. We do understand it, and its limits, and its constraints, and that means we can build a better one next time.


The problem with coming “to the realization that they have re-implemented TCP” is that it was ad hoc. In this situation, the re-implementation was done by people very familiar with TCP, its strengths, weaknesses, and assumptions, who very deliberately set out to “re-implement” TCP to work better with how our networks are actually configured.


Maybe re-implemented SCTP, but this time it's usable.


How does it work with a debugger? With TCP the connection didn't die just because you paused the program. But when everything is in userspace then that can't happen anymore?


We aren’t talking about raw networking; generally a QUIC implementation uses the kernel UDP stack, which buffers packets until read.


No, but TCP sends keep-alive packets or something in the kernel, right? If you can't send anything by the timeout then the connection should drop?


Only if you set the keepalive option, such as:

  int yes = 1;  /* s is an already-created TCP socket */
  setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes));
Most servers (Xserver, imapd, sshd) set it.


Vanilla TCP does not do this. Sockets remain established forever in the absence of traffic.


This is a core feature in TCP/IP. Only the endpoints actually involved in the connection care about what a "connection" really is. If they share a connection, it should be nobody else's business that they do.


This is definitely not true in today's world, which is filled with NATs. The intermediate routers very much care, and must care, about what connections exist.


> UDP stack, which buffers packets until read

Not really, or only up to a point. Then it will drop them into the bit bucket without telling either the sender or the receiver. With TCP the sender will eventually "find out" that the receiver isn't getting the data.

The point is that with streams on top of UDP all that has to happen in the application layer.


Until someone writes it into the kernel at which point that won't be a feature anymore.


Except you don't have to use it if you don't like the kernel option, which a normal application can't do for TCP.


That's true, I didn't think of it like that.


Well, HTTP/2 makes a lot of timing assumptions as well; it's just that they live in user space.


> Being in userspace is a feature.

This.


Would we be better served by Google reimplementing TCP on top of UDP, or by fixing TCP in Android, and on their servers, and telling us how they did it?

If it's better for TCP to be handled in userspace, fine -- they should build the APIs for that on the OSes they control; and agitate for it in the OSes they don't.

And, maybe, just maybe, they could turn on path MTU blackhole detection, please please please please please; it's only been in the Linux kernel for all versions of Android, but not turned on.
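For reference, the kernel feature in question is packetization-layer path MTU probing (RFC 4821), and turning it on is a one-line sysctl; the setting shown is my suggestion, not Android's:

```
# /etc/sysctl.d/: RFC 4821 packetization-layer path MTU probing.
# 0 = off (the usual default), 1 = enabled only after an ICMP
# blackhole is detected, 2 = always on.
net.ipv4.tcp_mtu_probing = 1
```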


> I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---either because they have thousands of open source developers, or are backed by a corporation---have it easy. Smaller players? Not so much.

This can be partially mitigated in the same way it has been worked-around before: Through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.

E.g. on the server side it might be good enough to have an API gateway, load balancer or CDN which understands HTTP/3 and forwards things in boring HTTP/1.1 to internal services. That's not very different from terminating TLS somewhere before the actual service implementation. Actually, service implementations don't even have to speak HTTP - they can also talk via stdin/out to an HTTP/3 server in another language - which means we're back to CGI.

On the client side, we could deploy a client-side proxy server which translates localhost HTTP/1.1 requests into remote HTTP/3 requests. If that thing is part of the OS distribution, it's actually not that much different from a TCP/IP stack which is delivered as part of the kernel. However if it's not part of the OS it might cause some trust issues. And apart from that it might be a bit inconvenient for users, since now applications need to be changed to make use of the proxy.


> This can be partially mitigated in the same way it has been worked-around before: Through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.

But if we’re doing that we get none of the so called benefits Google-HTTP 2.0 and Google-HTTP 3.0 brings, so what’s the point of using them in the first place?

That’s completely ignoring Google-HTTP 4.0, 5.0 and 6.0 probably coming next year, and the issue of when Google thinks it is “reasonable” to break compatibility with the real HTTP, ie HTTP 1.1.


You still get some of the benefits for the connection to the client, assuming your use case fits. Many typical small setups serve static resources through the "proxy" (i.e. nginx for static assets and distributing requests to backends), benefiting there almost automatically. Similarly CDNs, which nowadays are used even by tiny projects.

(also, if you want your concerns to be taken seriously, I'd tone it down a bit. "so called benefits", "Google HTTP", and "probably coming next year" when QUIC has been in development and testing for over 5 years all don't really give the impression you actually care about the details)


> Big players---either because they have thousands of open source developers, or is backed by a corporation---they have it easy. Smaller players? Not so much.

If you think that's bad, try building a browser from scratch these days!

Then, make it adhere 100% to the HTML5 and CSS3 specs! (W3C versions; I know WHATWG uses living docs.)


> SCTP never caught on for a reason,

What is that reason, exactly?

I know why I never use it. The use cases where it really shines aren't that common, and it's a very heavy, telco-style, protocol.

However those things are (mostly) true for QUIC as well.


From the article:

Why not SCTP-over-UDP

SCTP is a reliable transport protocol with streams, and for WebRTC there are even existing implementations using it over UDP.

This was not deemed good enough as a QUIC alternative due to several reasons, including:

    SCTP does not fix the head-of-line-blocking problem for streams
    SCTP requires the number of streams to be decided at connection setup
    SCTP does not have a solid TLS/security story
    SCTP has a 4-way handshake, QUIC offers 0-RTT
    QUIC is a bytestream like TCP, SCTP is message-based
    QUIC connections can migrate between IP addresses but SCTP cannot


SCTP does not work through NAT and Windows does not support it.


SCTP has port numbers, so NAT is not a problem. But because of the second point, why should someone implement it? (iptables can do it of course)


I'm assuming the parent meant that NAT, as implemented in SOHO routers, does not support SCTP. They could implement it, but don't, and thus, NAT breaks it.

> But because of the second point, why should someone implement it?

I'm reading this as "SCTP has ports, why should someone implement it?" There is way more to SCTP than ports. For example, SCTP can deliver data on multiple independent streams, something HTTP/2 in many ways reinvents.


Ah sorry. Second point was Windows.


Windows


> If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth.

Those already exist.

http://man7.org/linux/man-pages/man2/sendmmsg.2.html http://man7.org/linux/man-pages/man2/recvmmsg.2.html



