One of the best ways I've managed latency with MySQL is basically this:
1) Use persistent connections: let the OS keep them alive, and raise the relevant connection limits on both the application server and the MySQL server. Never close the connection on the application side. (This can lead to deadlocks, but there are ways around it, such as killing bad connections to clear their thread state on the MySQL side.)
2) Run the whole thing in a transaction: either issue an explicit BEGIN, or rely on autocommit where that's allowed (effectively the same thing).
With that in place, once you're done rendering the content, flush it and signal nginx or Apache that the response is complete (e.g. PHP's fastcgi_finish_request when working with FPM), and only then run your COMMIT. Obviously this only applies when you can safely disregard failed inserts.
> 1) Use persistent connections: let the OS keep them alive, and raise the relevant connection limits on both the application server and the MySQL server. Never close the connection on the application side. (This can lead to deadlocks, but there are ways around it, such as killing bad connections to clear their thread state on the MySQL side.)
This is definitely ideal, but one thing you can't entirely control is the server side, or anything in between. Sometimes your connections get interrupted, and it's not possible to hold a connection open forever. That said, yes, this is the ideal approach when paired with a connection pooler.
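A pooler handles exactly this caveat by validating connections before reuse. Here's a hedged sketch of that idea, a toy pool that pings each idle connection on checkout and discards dead ones; `FakeConn` and its `alive` flag are illustrative, not a real driver API.

```python
import collections

class FakeConn:
    """Stand-in for a pooled MySQL connection (hypothetical)."""
    def __init__(self):
        self.alive = True

    def ping(self):
        if not self.alive:
            raise ConnectionError("server closed the connection")

class Pool:
    def __init__(self, size=2):
        self._idle = collections.deque(FakeConn() for _ in range(size))

    def checkout(self):
        while self._idle:
            conn = self._idle.popleft()
            try:
                conn.ping()       # cheap liveness check before reuse
                return conn
            except ConnectionError:
                pass              # drop the dead connection, try the next
        return FakeConn()         # reconnect when nothing usable is idle

    def checkin(self, conn):
        self._idle.append(conn)

pool = Pool(size=1)
c1 = pool.checkout()
c1.alive = False                  # simulate an interrupted connection
pool.checkin(c1)
c2 = pool.checkout()              # dead one is discarded, a fresh one returned
```

Real poolers (ProxySQL, PHP's persistent connections with validation, etc.) do something morally equivalent to this checkout-time check or its reconnect-on-error variant.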
> 2) Run the whole thing in a transaction: either issue an explicit BEGIN, or rely on autocommit where that's allowed (effectively the same thing).
This shouldn't really help with latency. Being in a transaction doesn't reduce latency. If we're being pedantic, it would likely increase latency, since executing BEGIN and COMMIT typically costs two extra round trips, one per statement.
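The round-trip arithmetic behind that claim is simple enough to write down. The 0.5 ms RTT below is an assumed example value, not a measurement:

```python
# N queries under autocommit cost one round trip each; wrapping them in an
# explicit transaction adds two more round trips, one for BEGIN and one for
# COMMIT (each statement waits for the server's reply before the next send).

RTT_MS = 0.5          # assumed client-to-server round-trip time
N_QUERIES = 10

autocommit_ms = N_QUERIES * RTT_MS          # 10 round trips
transaction_ms = (N_QUERIES + 2) * RTT_MS   # 12 round trips: + BEGIN, COMMIT

overhead_ms = transaction_ms - autocommit_ms  # always exactly 2 * RTT_MS
```

So the transaction is a fixed two-RTT tax regardless of how many queries it wraps, negligible for large batches, noticeable for a request that runs only one or two queries.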
I think what you're getting at is something like pipelining, where you send multiple queries in one request and get multiple results back. This is technically supported by the MySQL protocol, but isn't widely adopted in practice.
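The win from pipelining is easiest to see as round-trip counting. This is a toy model of that accounting, not the MySQL wire protocol: sequentially you pay one round trip per query, pipelined you write all queries back-to-back and read the replies together.

```python
def sequential_round_trips(queries):
    # send a query, wait for its result, send the next: one RTT per query
    return len(queries)

def pipelined_round_trips(queries):
    # write every query without waiting, then read all results in order:
    # roughly a single round trip for the whole batch
    return 1 if queries else 0

queries = ["SELECT 1", "SELECT 2", "SELECT 3"]
```

For three queries this is 3 round trips versus 1, which is why pipelining (where a client library actually supports it) helps latency in a way that merely wrapping the queries in a transaction does not.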
We're a service provider. As a client and customer, you connect to us as a third-party service. You don't control our uptime or connectivity, nor whatever network hops may be in between.