
No one likes DDoSes from China. One can plead with Amazon as much as one wants, but it's pay or get booted: there are probably two engineers, each paid six figures a year by Amazon, getting paged for this DDoS. Someone must pay for the time they spend tuning DDoS protection, instead of working on their primary project, to keep the attacked website accessible for everyone else.

Source: I worked for AWS and was on call during similar attacks. The nasty thing about these is that they tend to start around 6-7 PM (guess when the working day starts in China).



You mean more than I'm already paying them?

The two colo centers we've hosted in have always helped us with DDoS issues free of charge. Maybe that's not normal, but even a former employee telling customers to GTFO makes Amazon look bad to me.


I am not telling anyone to GTFO, and I don't believe Amazon is either. It depends on the DDoS: a lot of smaller-scale DDoSes are just absorbed, some are filtered easily enough that no one is even notified, and some are serious. During my first on-call at Amazon I got DDoSed from 3 VPS machines, which was easy enough. A month later the same attacker started shifting machines inside the VPS provider; a month after that they started spoofing IPs within a narrow range; and within half a year (yes! they can last THAT long) the attack was coming from a range of spoofed IPs with traffic that followed no pattern except the destination they wanted to take down, at many gigabits per second.

In this case, 700k QPS (gigabits of ingress) of well-engineered HTTP/HTTPS DDoS traffic is not something an average colo can handle, or will even be willing to handle, at all. I'm assuming a hot-potato DDoS, where the customer comes along with a long tail of colos and providers that have already booted him. All that traffic, those servers, and ultra-expensive engineer time. Everyone wants it for free, but alas.


If you start talking about co-located or self-hosted services, the mitigation strategies are very different.

Assuming you can find yourself a transit provider that supports BGP flowspec updates (many don't, sadly), you can do this fairly cheaply. You'd obviously want some level of support from a network tech who knew what they were doing, but it's not insurmountable. There are a bunch of other options available too.
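For the curious, the mechanics look roughly like this. A minimal sketch using ExaBGP's process API, assuming your transit provider accepts flowspec (RFC 5575) announcements; the victim address and ports are placeholders, and the exact command syntax varies by ExaBGP version:

    #!/usr/bin/env python3
    # Sketch: have ExaBGP announce a flowspec rule asking the upstream
    # router to discard an HTTP(S) flood aimed at one address. ExaBGP
    # runs this script as an API process and reads commands from stdout.
    import sys
    import time

    VICTIM = "203.0.113.10/32"  # placeholder: the attacked address

    sys.stdout.write(
        "announce flow route { match { "
        f"destination {VICTIM}; protocol tcp; destination-port =80 =443; "
        "} then { discard; } }\n"
    )
    sys.stdout.flush()

    while True:          # stay alive so ExaBGP doesn't withdraw the rule
        time.sleep(60)

The win is that the provider drops the traffic inside their own network, before it ever reaches the pipe you pay for.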

This sort of thing is one of the downsides of having your infrastructure managed by someone else. If things go wrong and your provider doesn't feel incentivised enough to help you out, there's a lot less you can do about it, other than just pay whatever sum they demand.


I don't have much experience with co-located services, so I can't really comment on that. I also can't go into detail about how and which mitigations are applied on the AWS side, as I feel obliged to leave as many weapons as possible on the "good" side of a DDoS, and knowledge is one of them.

From what I remember, though, pushing BGP flowspec updates upstream was regarded as something close to impossible.


Interesting that your product can be spiked and produce significant increases in profit. These look like numbers that could knock a company out of business. Reminds me of old phone bills.


If product revenue doesn't grow faster than, or at least along with, traffic (i.e. expenses), the business will eventually knock itself out one way or another.


Turning sustained DDoS attacks into revenue sounds like an intriguing business scheme.


Also, AWS usually doesn't turn an attack into revenue; they push the customer up the "support tiers" (gold/platinum, whatever they are called now) and strip the DDoS traffic out of the bill as much as possible. Those tiers are quite expensive, but they are more or less fixed support costs.

My general point is: AWS is a business, and it operates as one. There are no Hollywood-style bad guys sitting in cubicle dungeons on chests filled with gold, scheming how to extract money; quite the contrary. It is understandable that a customer cannot pay unlimited (from the customer's perspective) charges, but AWS really does incur them: a customer being DDoSed consumes resources that would otherwise be sold to others, and engineer time that would otherwise go into developing new features and attracting new customers.


What do you think? A sustained DDoS attack must at least generate enough revenue to cover the sustained expenses it incurs, no?


That simply isn't reasonable. Name one business, other than maybe network providers, whose revenues grow in direct proportion to incoming packets, regardless of content.

You can't disregard any business that doesn't fulfil that property as being "eventually unsustainable".


My comment was a bit more general than pure "packets". I agree that's where the disconnect between low-level service providers and customers comes from: providers' revenues and expenses are "packets", while packets don't always translate into revenue for customers.

However, my note was about "traffic" in general: if one sells video views, for example, and revenues do not grow in line with bandwidth costs (even adjusted for the fact that those costs almost always decrease), sooner or later that will become a big problem.


I was at a product management event once and met a guy who managed a product in this space. A group of us went out for drinks after the event and he ended up explaining what he did. At some point he mentioned "Chinese hackers." Another guy in the group called him on it, wondering why he just assumed it was Chinese. He laughed and said that the near constant level of activity they see goes basically flat on Chinese New Year.

I suppose if you're not a Chinese hacker, it might pay to pretend you are by tailoring your working hours and days.


Yep it is THAT obvious...


Why don't providers just set up a system that creates a country-level null route for a given destination IP, with a UI checkbox for the user to enable it for any selected country? It would mitigate the issue, and once it's over, the user can un-restrict the traffic, or just keep blocking if it's a non-valuable source.

I know you can do this on the server, using many different techniques. But that doesn't help, as the traffic still reaches you (and you have to pay for it).

You can also do this with Geo DNS (and get much less of a bill).

And the ISPs, datacenters, and anyone with a router can block Asia- or China-allocated IP ranges, especially if it's not the type of flood that's designed to attack the routers (instead of the web server).
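To be fair, the mechanics at your own edge are cheap. A sketch, assuming you've downloaded APNIC's delegation file (delegated-apnic-latest): pull out the CN-allocated IPv4 blocks and null-route them. A provider would push the same ranges into their routers; doing it on your own box only helps if the flood targets the application rather than your link, since the packets still cross the wire you pay for:

    #!/usr/bin/env python3
    # Sketch: turn APNIC's delegation file into Linux null routes for
    # every IPv4 range allocated to CN. Line format:
    #   registry|cc|type|start|count|date|status
    def cn_ipv4_cidrs(path="delegated-apnic-latest"):
        with open(path) as fh:
            for line in fh:
                parts = line.strip().split("|")
                if len(parts) < 7 or parts[1] != "CN" or parts[2] != "ipv4":
                    continue
                start, count = parts[3], int(parts[4])
                if count & (count - 1):
                    continue  # rare record that isn't one CIDR block
                yield f"{start}/{33 - count.bit_length()}"

    for cidr in cn_ipv4_cidrs():
        # 'blackhole' installs a discard route in the kernel table
        print(f"ip route add blackhole {cidr}")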

So what's stopping Amazon?


The point of their website is to make censored content available to Chinese users.

China is attacking them to prevent Chinese people from reading the website.

Your suggestion is to make the site unavailable to China.

Do you see why it is not a solution? You are basically setting up a market for censorship: the attack doesn't ever have to end, depending on how much China is willing to pay to keep the website offline.


OTOH if the great firewall already blocks this site, wouldn't that mean normal Chinese citizens would access it through a VPN via another country?


The great firewall does already block this site, and I cannot view it now without turning on my VPN. Therefore this site is pretty useless to me currently, but someone has to fight the censorship. Maybe one day it will actually succeed?


If normal citizens had a VPN with which to access this site from another country, wouldn't using this site be quite redundant? Maybe I'm not understanding.


If normal citizens have access to it without a VPN, then why doesn't China just block it with the firewall instead of ddosing?

Maybe I'm not understanding.


Yeah, no -- my fault. I wrongly assumed it was some type of proxy or way to get around the great firewall.


Their website alone can't help Chinese users get around the GFW, but it can provide information to those who already can.

Attacking a single site is a more proactive move by the GFW's operators, and a sign that it can get so much worse.


Please, don't perceive this as being rude, it's not meant to be.

Having provided IP transit at a largish network provider in a previous life, I can tell you that you have no idea of the complexity involved in what you're asking for. It could be done, but the costs involved are non-trivial.

If you're honestly interested in the complexity involved, start reading about BGP, dynamic routing protocols, router/switch fabrics, control plane integration, autonomous systems, peering agreements, etc.


I feel it's not unreasonable to expect AWS/Cloudflare/Akamai to have policy-based routing that can blackhole a lot of these source subnets. Of course it's complex, but these are some of the largest hosting providers in the world.


I've found this is a common thing for AWS employees to say. One of them insisted that Amazon's ridiculous ephemeral storage policy (immediate, permanent, and irrevocable deletion on any halt or stop event, making accidental data loss a real possibility) had to be that way because it would just take too much hardware to allow a cooldown period before the drives were wiped. There's no way I believe that. I think Amazon is just used to intimidating customers with exactly that line of reasoning: "No offense, but you have no idea how hard the cloud is", and people buy it because "the cloud" is the new hotness.


I've had RAID 6 fail. It should be extremely rare, but isn't. And at AWS's scale, it's not hard to imagine servers going offline regularly. Ephemeral storage as a policy makes sense to me in that you can separate out what's important from what's ephemeral, and provide cheaper storage than a more HA solution like Ganeti.


Why isn't the persistent data put onto an EBS volume?

If ephemeral storage is a drive local to the virtual machine's host (which I think is the case), then having a cooldown period would mean holding the hardware you used to use in reserve until the grace period expired.

It's possible, but it's a lot of work to solve a problem that is better solved by not relying on ephemeral storage persisting.
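For reference, moving persistent data onto EBS is only a couple of API calls. A minimal boto3 sketch; the region, AZ, size, and instance ID are placeholders, and you'd still mkfs and mount the device from inside the instance:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a volume in the same AZ as the instance and wait for it.
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                            VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach it; data on this volume survives stops and halts.
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0",  # placeholder
                      Device="/dev/sdf")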


I mean, it's a nice idea in theory, but in practice stuff finds its way onto the ephemeral disk even if you have EBS volumes mounted, and "Sorry, we just deleted all your crap, I guess you should've had that on EBS" (which is an extra fee by the way) is not an acceptable solution to the problem.

Yes, it would mean holding the hardware in reserve for the cooldown period. I'm not talking months here, just enough time to recover from an accidental "sudo shutdown -h now" instead of "sudo shutdown -r now" (or similar). It'd be nice if Amazon sent an email warning about the condition and gave you an hour or so to go in and save your data or restart your instance. They could even make it a policy that you're charged for the time your instance is running plus one hour to pay for the cooldown feature, if they're really that worried about it; it's better than wiping data as soon as someone stops (from the AWS console) or shuts down (from a real console) an instance and providing absolutely no avenue for recovery, no matter how quickly you notice the mistake.


I have never had stuff accidentally find its way onto ephemeral storage. The ephemeral storage is mounted at one specific location, /mnt. Everything else on the system (OS, binaries, application code and resources) is stored on an EBS volume.

You have to specifically put something into the /mnt folder if you want it to be stored on the ephemeral storage. Any other location is safe and will persist through halts and stops.

In practice the only thing you should ever use the /mnt folder for is maybe an Nginx disk cache, or as an alternative to /tmp, or something like that. Basically, if stuff you don't want to lose is finding its way onto the ephemeral storage, then you are doing something wrong.
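If you're ever unsure what is actually ephemeral on a given instance, the metadata service will tell you. A small sketch (run on the instance itself, assuming the classic instance metadata endpoint is reachable without a token):

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/block-device-mapping/"

    def fetch(url):
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.read().decode()

    # Mapping names look like 'ami', 'root', 'ephemeral0', ...
    for name in fetch(BASE).split():
        device = fetch(BASE + name)
        kind = "EPHEMERAL (wiped on stop)" if name.startswith("ephemeral") \
               else "persistent"
        print(f"{name:12s} -> {device:12s} {kind}")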


It depends on your users. We have some users who are not super well-versed in AWS; they just see a big disk and put data there, and someone has to come back and move it to an EBS volume to make sure it's safe. /mnt is also used as a staging area for large files, and the intention is always to move them to permanent storage when done, but that sometimes doesn't happen.

In the non-AWS world, /mnt is usually where the bigger, more authoritative disks, like an NFS mount to the NAS, would be mounted, so it's counter-intuitive to tell users to treat /mnt like /tmp. Even if someone is using /mnt as a temporary store because they understand EBS vs. ephemeral, shutting down from within the instance shows no warning about the doom of the ephemeral data, and it may be unclear that a shutdown/system halt is the same as a "stop" in the AWS console, so they could lose the data they had in the staging area unexpectedly.

There are plenty of plausible situations where an AWS user can find themselves with important, even just temporarily important, data on ephemeral. Whether those are the result of "correct" usage or not, it's beyond the pale to just zap that data away and tell the customer tough titties as soon as a shutdown command is issued.


If you need persistent storage, use EBS. Ephemeral is self-defined.


I've been looking for work for a while, but I won't even respond to solicitations or job board posts that so much as mention the cloud, agile or scrum.


Rather than modding me down, perhaps you could explain why you disagree.


What you said adds pretty much no value to the conversation. I didn't mod you down, and I didn't look to see precisely what field you work in. Just the same, "cloud" is pretty much an obfuscation for on-demand co-location and shared-hosting services.

You may not like the trend, some providers have better options than others, and you may require some operations over others... but very few businesses can afford to manage multi-site infrastructure that can scale dynamically. Most are probably served just fine by rented servers or traditional colocation, or don't even need more than one VPS...

That doesn't make the technology bad; it only makes you seem ignorant in your prior statement.


I admit I could have explained why.

I'll go into it later, but for the most part I consider the cloud a really bad idea. I also regard most cloud companies as "buzzword-enabled" so as to attract investors.

My gripe with scrum and agile is not so much with the methodologies, but with companies that think they have a methodology when in reality they have a bureaucracy.


>Cloudflare

You mean the company that provides caching services for pay-for-DDoS services?


Geo DNS (to be more precise, AS numbers are what's actually used) is something that is sometimes used as a very last resort: customers in China are customers as well, and if you drop everything from China, that's effectively what the attacker wanted.

I recall hearing about only one time when packets from Chinese ISPs were completely dropped for some reason, and only for a short period.

I also have an anecdotal reference that one can persuade the providers that actually deliver traffic from China to filter on ingress at the next-hop routers after China, but it has to be something very serious and prolonged, something that impacts their revenue as well. As another commenter noted, the costs for providers are very non-trivial.

In my experience a DDoS is always a money competition: it costs money to mount one, and it costs money to defend against one. Unfortunately, when one of the sides is [allegedly] a country, that doesn't play out very well.


I think that would defeat the purpose. The page is being DDoSed from China and is supposed to be reachable from China at the same time.


The real reason is that Amazon wants to do business in China, so they absolutely cannot do something like that on their end without getting blacklisted by China's government.


DDoSes come from hosts that are part of botnets, usually compromised machines from all over the world, not one country or subnet.


Why not redirect to a CAPTCHA to prove that the user is not a bot?


So now you DDoS the CAPTCHA system. For companies not operating with massive bandwidth and computing power, you can just overwhelm their defenses. Cloudflare can get away with it because they explicitly set out to be able to "service" those super-huge numbers of requests.

I was working on an anti-DDoS system for SIP, a UDP-based protocol. Basically the options were: 1. Lockdown: just whitelist known-good customers, and break many scenarios. 2. Attempt some kind of analysis, like sending out probes to determine good/bad IPs. 3. Scale the hell up: write L7 stuff that can run at wire speed, and get lots of wires.

Needless to say, #1 is the easiest to implement, but allows you to get your pipe saturated. #2 requires compute + pipe, and #3 is the only thing that'll really work.
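To illustrate how little #1 buys you, here is a minimal sketch of a whitelist-lockdown UDP relay (addresses and ports are placeholders). It drops unknown sources before they reach the real SIP server, but every dropped packet has still crossed, and can still saturate, your pipe; a real deployment would do this in the kernel or on dedicated hardware:

    import socket

    WHITELIST = {"198.51.100.7", "198.51.100.8"}  # known-good customers
    UPSTREAM = ("127.0.0.1", 5061)                # the real SIP server

    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(("0.0.0.0", 5060))
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        data, (src_ip, src_port) = listener.recvfrom(65535)
        if src_ip not in WHITELIST:
            continue  # silently drop unknown sources
        upstream.sendto(data, UPSTREAM)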

This matters because DDoS'ing a telecom can be very lucrative. I can say with good confidence that demonstrating DDoS capability is probably worth 5-6 digits in blackmail against many companies.


Good luck DDoSing reCAPTCHA. I'll wait.


Greatfire is unique in that they want the site to remain accessible to ordinary Chinese users while withstanding the DDoS attack (so they can't blackhole all traffic from China either).

If they put a reCAPTCHA wall in front, the GFW can simply block reCAPTCHA (easy -- it is a Google property and they block everything else from Google anyway) and no one from China can access Greatfire without a VPN. Mission accomplished.


Assuming the attacker is indeed China:

If the goal was only to block Greatfire for non-VPN users, then they could just use the GFW for that from the start. The use of a DDoS can only imply that China wants the site offline for everyone, even VPN users.


I think Greatfire is evading the GFW by hiding their mirrored content behind innocent-looking websites so that the GFW does not block it. Once the censors discover a Greatfire node, they block it, but then Greatfire just moves on to another IP address or domain name.

With this DDoS, they are taking the different route of attacking Greatfire's infrastructure so that it can't serve traffic from China at all. Causing massive bills and outages for Greatfire is probably a bonus, but I don't think that is the main intention.


I chuckled, because when everyone tells me "AWS is practically the internet" I can point out "The Internet is resilient at a far lower cost than Amazon".


The Internet as a whole, yes; making a single attacked web service resilient to DDoSes at low cost is quite a challenge.


I used to work at AWS too. Where were you? Seattle / support?


Software engineer. I'm pretty easily trackable on the internets too :]


I'm pretty sure they don't make 30k a day though.


In my experience, Amazon will most probably write off the bill if the customer refuses to pay, but will refuse further service as well.


How do we know people inside China are responsible?


People? No. IP/AS numbers, where the traffic lands first, on which POPs... It's pretty obvious.



