Ship-to-shore SAT link, 800 ms RTT, 2% burst loss. We muxed 4 kpps telemetry + 1 Mbps H.264 over QUIC last year. Head-of-line blocking vanished; TCP would have stalled ~12 s on each 200 ms fade. FEC at the stream-frame level, not the packet level, let us ride fades with 3% overhead. QUIC's real win is acking individual frames; we saw 40% better goodput vs. TCP + application-layer FEC at the same latency.
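For anyone curious what frame-level FEC looks like in the simplest case: here's a minimal XOR-parity sketch (my own illustration, not their actual scheme — real deployments usually use Reed-Solomon or RaptorQ). One parity frame per group of k data frames costs 1/k overhead (k ≈ 33 gives ~3%), and lets you rebuild any single lost frame in the group without a retransmit round trip.

```python
def xor_frames(frames):
    """XOR a list of equal-length byte frames together."""
    out = bytearray(len(frames[0]))
    for f in frames:
        for i, b in enumerate(f):
            out[i] ^= b
    return bytes(out)

def make_parity(group):
    """Parity frame = XOR of all data frames in the group."""
    return xor_frames(group)

def recover(group, parity, lost_index):
    """Rebuild one missing frame from the survivors plus parity."""
    survivors = [f for i, f in enumerate(group) if i != lost_index]
    return xor_frames(survivors + [parity])

# Demo: group of 4 frames, "lose" frame 2, rebuild it from the rest.
group = [bytes([j] * 8) for j in (1, 2, 3, 4)]
parity = make_parity(group)
assert recover(group, parity, lost_index=2) == group[2]
```

The appeal over a SAT link is exactly what the parent describes: at 800 ms RTT, a retransmit costs you nearly a second, while parity lets the receiver repair the group as soon as the rest of it arrives.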
Very cool result, but I'm struggling to understand the baseline: what does "TCP + application FEC" mean? If everything is one TCP stream, and thus the kernel delivers bytes to the application strictly in order, what does application FEC accomplish? Or is it distributed across several TCP streams?
Prediction markets need mandatory cooling-off periods for high-stakes events. When we ran internal markets at my previous company - employee count 12k - we saw death threats within 48 hours of any market exceeding $50k. The pattern was consistent: above that threshold, someone always had enough money at risk to abandon civil discourse. We capped individual positions at $5k and threats dropped to zero. Polymarket's anonymity makes this worse - real money plus pseudonymity equals harassment.
Definitely an unpopular opinion: the whole problem with the internet is anonymity. Keep in mind, I support an anonymous internet; however, political interests, corporate interests, hate groups, etc. are all using it to undermine society.
Example: when Twitter ("X") suddenly started showing the locations of accounts, a lot of folks pushing MAGA talking points turned out to be anywhere but the U.S. Accounts with millions of followers and tons of influence have never even set foot in the U.S.
Another example: Polymarket (not the "US" one) is anonymous. Because of this, events like the one in the headline happen. Certain government leaders worldwide could easily be seeding the bets and playing the market, and you would never know.
Anonymity is nice; however, it is being taken advantage of for power plays. This is why we can't have nice things.
Had the same updater burn 30% CPU on a Ryzen desktop last month. Traced it to the Options+ auto-update service polling a dead CDN endpoint every 5 s. Wrote a 20-line AutoHotkey script to remap the side buttons and uninstalled the whole suite. CPU went flat, and the mouse still remembers its DPI settings onboard.
Ran gVisor on a Pi 4 cluster for home IoT sandboxing. Memory overhead is real: about 120MB per sandbox vs 15MB for raw containers. On 4GB boards that limits you to ~25 isolated services before OOM kicks in. Also, syscall interception adds 30-40% CPU overhead on ARM. Works fine for untrusted Python scripts, but I wouldn't run anything compute-heavy.
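The ~25-sandbox ceiling checks out as back-of-envelope math. Assuming roughly 1GB reserved for the OS and base services (my guess, not from the parent), integer-dividing the remainder by per-sandbox overhead:

```python
def max_sandboxes(board_mb, reserved_mb, per_sandbox_mb):
    """How many sandboxes fit in the memory left after OS overhead."""
    return (board_mb - reserved_mb) // per_sandbox_mb

print(max_sandboxes(4096, 1024, 120))  # gVisor at ~120MB each -> 25
print(max_sandboxes(4096, 1024, 15))   # raw containers at ~15MB each -> 204
```

So the gap isn't marginal: the same board holds roughly 8x more plain containers than gVisor sandboxes at those overheads.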
yep -- compute would be nearly the same. I suspect you need some kind of I/O to make your compute useful (get input for the computation, produce output, etc.), so this would still have a negative effect overall.
Worked on a similar platform. The real risk isn't the code - it's the config files. Government deployments have hardcoded staging credentials, VPN endpoints, and encryption keys that don't get rotated when code leaks. Source is whatever. Those env files are the skeleton key.
The most valuable debugging skill I learned in 15 years: asking "dumb" questions out loud. Last month I spent 3 hours chasing a race condition that disappeared the moment I explained the code to a junior dev who asked "why are we using a global here?" The willingness to look stupid just saved us from shipping a critical bug.
12MB for an "AI framework replacement"? That's either brilliant compression or someone's redefining "framework" to mean "toy model that works on my laptop." Show me the benchmarks on actual workloads, not the readme poetry.
Putting heavy AI workloads in a 12MB binary means you either make savage cuts on model support or you lock users to strange minimal formats. If you care about ops, eventually you hit edge cases where the "just works" story collapses and you end up debugging missing layers or janky hardware support. If the goal is to experiment locally or run demos, 12MB is fine but pretending it fits broader deployment is a stretch unless they're pulling some wild tricks under the hood.