
I have run a large distributed app that was somewhat microservice based, but also had a large central sharded set of Postgres tables. About 10% of the total CPU usage was in the Postgres servers; the rest was in the frontends, which handled many hundreds of requests per second per instance while maintaining a pool of open connections to Postgres. The multiplicative effect meant that each Postgres server would have had thousands of connections open to it. Or we could use pgpooler and have only dozens of actual connections, with pgpooler doing lightweight pooling between the many frontend instances, each of which might have just one request in flight or a few dozen.

Load from each frontend to each database was fairly random: bell curved, but the highs and lows were wide enough that the N frontends we needed, times the M max concurrent requests a frontend was likely to have, was way outside the budget for Postgres's max concurrent connections, even after tuning the per-connection memory buffers.
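A minimal sketch of the N×M math being described, with hypothetical numbers (the actual fleet sizes and pool settings aren't given in the comment):

```python
# Hypothetical numbers to illustrate the multiplicative effect:
# every frontend keeping its own pool of connections open to a shard.
frontends = 200            # N frontend instances (assumed)
pool_per_frontend = 20     # M max in-flight requests per frontend (assumed)

# Without an external pooler, each frontend connects directly,
# so the server-side connection count is the product of the two.
direct_connections = frontends * pool_per_frontend
print(direct_connections)  # thousands of open server connections

# With a lightweight pooler between the frontends and each shard,
# the server only needs connections for truly concurrent queries,
# since most of those N*M client connections are idle at any instant.
pooler_backend_connections = 50  # assumed pooler pool size
print(pooler_backend_connections)
```

The design point is that client connections are mostly idle, so a pooler can multiplex thousands of them onto a few dozen server connections, each of which carries real per-connection memory cost in Postgres.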

