Hikari connection pool failover takes a long time

In an OpenShift (3.11) cluster we’re running multiple containers with the Camunda community edition (7.12), using Hikari (3.4.1) connection pooling in Spring Boot (2.2.1) and Tomcat (9.0.27).

If there are no users in the system and the database fails over, Hikari recreates the connection pool in around 10 seconds. If, however, there are a few users on the system when the database fails over, it can take a minute or two for Hikari to recreate the connection pool, and those users time out and receive error messages.

Does anyone know how to reduce the time it takes to repopulate the connection pool? Keeping it low would mean a much better experience for our users.

Hi Walter,

This is a little off topic for Camunda… however I believe the Hikari pool has a `connectionTimeout` parameter which defaults to 30s. Hence in your property config file you could adjust this down a little…
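For example, in a Spring Boot `application.properties` file it could look something like the sketch below. The `5000` value is purely illustrative, not a recommendation; tune it to your failover window:

```properties
# Hypothetical example values, assuming Spring Boot's Hikari property binding.
# Fail pending getConnection() calls after 5s instead of Hikari's 30s default,
# so blocked user requests surface (and can be retried) sooner during failover.
spring.datasource.hikari.connection-timeout=5000
```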

Also, behaviour can depend on the JDBC driver. With older (pre-JDBC4) drivers, you had to set a test SQL query (Hikari's `connectionTestQuery`) used to validate connections from the pool. More modern JDBC4 drivers handle this automatically via `Connection.isValid()`…
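As a sketch (again illustrative values, and only relevant if your driver actually needs an explicit test query):

```properties
# Only needed for pre-JDBC4 drivers; JDBC4 drivers are validated via Connection.isValid()
spring.datasource.hikari.connection-test-query=SELECT 1
# Cap how long a validation check may run before the connection is discarded (Hikari default 5s)
spring.datasource.hikari.validation-timeout=2500
```

With a shorter validation timeout, dead connections left over from the failed primary should be evicted from the pool faster, which may be what’s dragging out the repopulation when users are active.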

Anyway, have a look at the Hikari configuration parameters…