I hope the pool library would be written in Erlang so it is easily usable from both Erlang and Elixir!
We’ve also had a real production need for a pool that enforces a max lifetime on connections (the underlying reason being that we have to go through firewalls that have a max connection lifetime of e.g. 1 hour, so we would frequently get errors when checking out a stale connection that was silently dead at the TCP level). I don’t know of any Erlang pool library that supports this.
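A minimal sketch of what such a max lifetime could look like: each pooled connection records its start time, and checkout drops any connection older than the limit before handing one out. The names (`LifetimePool`, `:max_lifetime`, `checkout/2`) are purely illustrative, not from any existing pool library.

```elixir
defmodule LifetimePool do
  # Each entry in :conns is {conn, started_at} in milliseconds.
  defstruct conns: [], max_lifetime: :timer.hours(1)

  # Hands out the first connection that is still within its lifetime,
  # discarding expired ones; returns :empty when nothing usable is left.
  def checkout(%__MODULE__{conns: conns, max_lifetime: max} = pool, now) do
    case Enum.split_with(conns, fn {_conn, started_at} -> now - started_at < max end) do
      {[], _expired} -> {:empty, %{pool | conns: []}}
      {[{conn, _} | rest], _expired} -> {conn, %{pool | conns: rest}}
    end
  end
end
```

A real pool would also close the expired sockets and asynchronously open replacements, but the expiry check at checkout is the part that prevents handing a firewalled-dead connection to a caller.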
When you talk about warm-up, are you referring to connections that have already prepared a query?
I don’t believe it’s the pool’s job to cancel anything; the caller should handle that.
This is essential, but not the pool’s responsibility; a data structure should be used to flag whether a connection is in use or not.
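One possible shape for such a data structure, sketched with an ETS set (module and function names here are hypothetical): `insert_new/2` gives an atomic claim, so a connection can never be handed to two callers at once.

```elixir
defmodule InUse do
  # A public ETS set holding one row per claimed connection.
  def new, do: :ets.new(:in_use, [:set, :public])

  # Marks conn as in use; returns false if it was already claimed,
  # which makes the claim atomic without a coordinating process.
  def claim(table, conn), do: :ets.insert_new(table, {conn})

  # Releases the connection so it can be claimed again.
  def release(table, conn), do: :ets.delete(table, conn)

  def in_use?(table, conn), do: :ets.member(table, conn)
end
```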
This is interesting. Why would you want the pool to be responsible for this? I imagine you would want to leverage the database to lock the tables, as there could be clients that connect from outside your application.
I don’t believe the pool in itself should be concerned with multi-tenancy, as users should be able to create as many independent pools as they see fit.
Yes, stats are important, especially for dynamic sizing.
> I hope the pool library would be written in Erlang so it is easily usable from both Erlang and Elixir!
Unfortunately, this pool will not be its own library to begin with, and it will be in Elixir. I do plan to port most of the code to Erlang, but currently there are some issues with some macros we use for Unicode set identification which are preventing us from porting most of the code to Erlang.
No, database connections internally cache information from previous queries. Reusing that cache is what I meant. BTW, that may also be a reason to drop the connection after a period, because it could lead to excessive memory use.
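To make the memory concern concrete, here is a hypothetical per-connection statement cache with a size cap: query results stay reusable, but the cache can’t grow without bound. A real driver would also deallocate the server-side prepared statement on eviction.

```elixir
defmodule StatementCache do
  # Maps SQL text to a prepared-statement handle, capped at :limit entries.
  defstruct entries: %{}, limit: 100

  def fetch(%__MODULE__{entries: entries}, sql), do: Map.fetch(entries, sql)

  # Inserts a prepared statement, evicting an arbitrary entry once the
  # cap is reached (a real cache would evict least-recently-used).
  def put(%__MODULE__{entries: entries, limit: limit} = cache, sql, prepared) do
    entries =
      if map_size(entries) >= limit do
        {key, _} = Enum.at(entries, 0)
        Map.delete(entries, key)
      else
        entries
      end

    %{cache | entries: Map.put(entries, sql, prepared)}
  end
end
```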
For when you do a live upgrade of your schema. This way you can temporarily let your callers wait until the update is done. We generally don’t take external clients into account.
Correct, a connection shouldn’t be claimable before a caller has released it, and no queries should be running when it’s checked in.
> For when you do a live upgrade of your schema. This way you can temporarily let your callers wait until the update is done. We generally don’t take external clients into account.
In the most common situations I’ve seen, there have always been other technical issues that warranted a multi-release cycle. It sounds like a neat feature, but I have a hard time imagining a real-world use case.
> No, database connections internally cache information from previous queries. Reusing that cache is what I meant. BTW, that may also be a reason to drop the connection after a period, because it could lead to excessive memory use.
Okay, this makes sense. Connections cannot be long-lived and need to be refreshed.