Service Bus Queue PeekMessages always times out in specific queues #26421
Open
Labels
- Client: This issue points to a problem in the data-plane of the library.
- Service Attention: Workflow: This issue is responsible by Azure service team.
- Service Bus
- customer-reported: Issues that are reported by GitHub users external to the Azure organization.
- question: The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
Description
Bug Report
-
import path of package in question
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus
-
SDK version
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.10.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.2
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.11.0
-
output of `go version`
go version go1.26.0 darwin/arm64
-
What happened?
- `Receiver.PeekMessages()` consistently times out (`context deadline exceeded`) for specific queues, while `Receiver.ReceiveMessages()` on the same queue/receiver succeeds quickly (typically <1s).
- Increasing the peek timeout (including to ~20s+) still times out on affected queues.
- This is not global: other queues in the same Service Bus namespace may work fine with peek.
-
What did you expect or want to happen?
- `PeekMessages()` should have reliability similar to `ReceiveMessages()` on the same queue.
- If there is a queue-level condition causing peek failures, we expect a clearer, actionable error rather than repeated `context deadline exceeded`.
-
How can we reproduce it?
- This is sporadic, so deterministic reproduction is difficult.
- Pattern we observe:
- Connect to a known intermittently-affected queue.
- Repeatedly call
PeekMessages()with timeout (e.g. 10–20s), then callReceiveMessages()on same queue. - In failing windows, peeks time out repeatedly while receives continue to succeed quickly.
- Run the same sequence against other queues in same namespace; many remain healthy.
Example harness:
```go
client, _ := azservicebus.NewClient(namespace, cred, nil)
r, _ := client.NewReceiverForQueue(queue, &azservicebus.ReceiverOptions{
	ReceiveMode: azservicebus.ReceiveModeReceiveAndDelete,
})
for i := 0; i < 20; i++ {
	pctx, pcancel := context.WithTimeout(context.Background(), 20*time.Second)
	_, perr := r.PeekMessages(pctx, 1, nil)
	pcancel()

	rctx, rcancel := context.WithTimeout(context.Background(), 15*time.Second)
	msgs, rerr := r.ReceiveMessages(rctx, 100, nil)
	rcancel()

	log.Printf("iter=%d peekErr=%v recvErr=%v recvCount=%d", i, perr, rerr, len(msgs))
	time.Sleep(1 * time.Second)
}
```
We understand peek uses management/RPC path while receive uses receiver data link; sharing in case this aligns with known broker-side conditions.