Enhance Redis HA with client retry and connection health settings #34557
Description
Self Checks
- I have read the Contributing Guide and Language Policy.
- I have searched for existing issues, including closed ones.
- I confirm that I am using English to submit this report, otherwise it will be closed.
- Please do not modify this template :) and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell me about your story.
While reviewing Redis high-availability behavior in the backend, I found that the main Redis clients do not appear to be configured to retry failed commands during transient network issues or Redis failover windows.
The current implementation in api/extensions/ext_redis.py builds the main clients through:
- `redis.ConnectionPool(**redis_params)` for standalone Redis
- `sentinel.master_for(...)` for Sentinel
- `redis.Redis.from_url(...)` / `RedisCluster.from_url(...)` for pub/sub
However, the shared Redis parameters currently only include auth/db/encoding/cache settings and do not pass retry-oriented options such as:
- `retry`
- `retry_on_timeout`
- `retry_on_error`
- `socket_timeout`
- `socket_connect_timeout`
- `health_check_interval`
Since Dify uses redis-py 7.3.0, I verified that:
- clients created from `redis.ConnectionPool(...)` use connections with `retry._retries == 0`
- Sentinel-managed connections also end up with `retry._retries == 0`
- `Redis.from_url(...)` also results in connections with `retry._retries == 0`
So even though Sentinel/Cluster may help with topology discovery, individual failed commands are still surfaced immediately instead of being retried by the Redis client. This weakens HA behavior in practice, especially during:
- master failover
- brief network blips
- half-open or stale socket reuse
There is already one local example in `api/schedule/queue_monitor_task.py` that sets `socket_timeout`, `socket_connect_timeout`, and `health_check_interval`, which suggests this problem is already recognized in one isolated path but not applied consistently to the main backend Redis clients.
I would like to propose enhancing Redis HA by adding a shared retry/backoff and connection health policy for all backend Redis client construction paths in `api/extensions/ext_redis.py`.
2. Additional context or comments
Suggested direction:
- define one shared retry policy in `api/extensions/ext_redis.py`, for example with `redis.retry.Retry` plus exponential backoff
- pass that policy into standalone, Sentinel, Cluster, and pub/sub client creation
- add conservative `socket_timeout`, `socket_connect_timeout`, and `health_check_interval` defaults
- optionally expose the retry and timeout knobs in config so operators can tune them for different deployments
This would make Redis behavior more resilient without forcing every call site to implement its own retry logic.
It would also complement the existing `redis_fallback()` decorator, which is useful for best-effort paths but does not replace transport-level retry for transient failures.
3. Can you help us with this feature?
- I am interested in contributing to this feature.