kafka-consumer(ticdc): tolerate replayed resolved and DDL events (#12596)#12621

Open
ti-chi-bot wants to merge 1 commit into pingcap:release-8.1 from ti-chi-bot:cherry-pick-12596-to-release-8.1

Conversation

@ti-chi-bot
Member

This is an automated cherry-pick of #12596

What problem does this PR solve?

Issue Number: close #12595

What is changed and how it works?

  • treat replayed resolved/checkpoint fallback in cmd/kafka-consumer as duplicate delivery instead of a fatal error
  • deduplicate replayed DDL events by logical DDL identity instead of pointer identity
  • add regression tests covering replayed resolved/checkpoint handling and equivalent versus split DDL events
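The second bullet (deduplicating replayed DDL events by logical identity rather than pointer identity) can be sketched roughly as follows. The struct and function names below are illustrative stand-ins, not the actual TiCDC types:

```go
package main

import "fmt"

// DDLEvent mirrors only the fields that identify a DDL logically;
// this is a simplified stand-in for the real TiCDC event type.
type DDLEvent struct {
	CommitTs uint64
	Query    string
}

// sameDDL reports whether two events are the same logical DDL, so a
// replayed copy (delivered again as a different object) is still
// recognized as a duplicate even though pointer equality fails.
func sameDDL(a, b *DDLEvent) bool {
	return a.CommitTs == b.CommitTs && a.Query == b.Query
}

func main() {
	first := &DDLEvent{CommitTs: 100, Query: "CREATE TABLE t (id INT)"}
	replay := &DDLEvent{CommitTs: 100, Query: "CREATE TABLE t (id INT)"}
	// Pointer identity fails, logical identity succeeds.
	fmt.Println(first == replay, sameDDL(first, replay)) // prints: false true
}
```

Comparing on commit ts plus query text is what lets the consumer drop a replayed DDL while still rejecting a genuinely different DDL at the same timestamp.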

Check List

Tests

  • Unit test
  • Manual test

Questions

Will it cause performance regression or break compatibility?

No. This only makes the standalone Kafka consumer tolerate duplicate MQ delivery in line with TiCDC's at-least-once behavior.
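As a rough illustration of that at-least-once tolerance (function and variable names here are hypothetical, not the actual consumer code): a resolved ts that does not advance is skipped as a duplicate delivery instead of being treated as a fatal fallback.

```go
package main

import "fmt"

// handleResolvedTs treats a replayed (non-advancing) resolved ts as a
// harmless duplicate rather than a fatal error. It returns the new
// watermark and whether it advanced.
func handleResolvedTs(current, incoming uint64) (uint64, bool) {
	if incoming <= current {
		// Duplicate MQ delivery under at-least-once semantics: skip it.
		return current, false
	}
	return incoming, true
}

func main() {
	ts, advanced := handleResolvedTs(200, 150) // replayed message
	fmt.Println(ts, advanced)                  // prints: 200 false
	ts, advanced = handleResolvedTs(ts, 250)   // fresh message
	fmt.Println(ts, advanced)                  // prints: 250 true
}
```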

Do you need to update user documentation, design documentation or monitoring documentation?

No.

Release note

Fix `cdc_kafka_consumer` to tolerate replayed resolved/checkpoint and equivalent DDL messages under duplicate MQ delivery.

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@ti-chi-bot ti-chi-bot added lgtm release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. type/cherry-pick-for-release-8.1 This PR is cherry-picked to release-8.1 from a source PR. labels Apr 24, 2026
@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

This cherry pick PR is for a release branch and has not yet been approved by triage owners.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick:

  1. It must first be LGTMed and approved by the reviewers.
  2. For pull requests to TiDB-x branches, it must have no failed tests.
  3. After it has the lgtm and approved labels, please wait for the cherry-pick merging approval from triage owners.
Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has not yet been approved by any reviewers.
Once this PR has been reviewed and has the lgtm label, please assign maxshuang for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot Bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 24, 2026

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a Kafka consumer writer for TiCDC, handling message decoding, DDL/DML events, and partition progress. The review identifies a critical data race caused by sharing a stateful decoder across partitions. Further improvements include clarifying DDL fallback log messages and resolving a busy-wait loop in the event flushing logic to optimize CPU usage.

Comment on lines +179 to +184
for i := 0; i < int(o.partitionNum); i++ {
if err != nil {
log.Panic("cannot create the decoder", zap.Error(err))
}
w.progresses[i] = newPartitionProgress(int32(i), decoder)
}


critical

A single decoder instance is being shared across all partitions. Decoders, such as simple.Decoder or open.BatchDecoder, are stateful and not safe for concurrent use. Since messages from different partitions can be processed concurrently, this will lead to data races.

A new decoder instance should be created for each partition to ensure thread safety.

 
	for i := 0; i < int(o.partitionNum); i++ {
		decoder, err := NewDecoder(ctx, o, db)
		if err != nil {
			log.Panic("cannot create the decoder", zap.Error(err))
		}
		w.progresses[i] = newPartitionProgress(int32(i), decoder)
	}

Comment on lines +227 to +230
log.Warn("DDL CommitTs < maxCommitTsDDL.CommitTs",
zap.Uint64("commitTs", ddl.CommitTs),
zap.Uint64("maxCommitTs", w.ddlWithMaxCommitTs.CommitTs),
zap.String("DDL", ddl.Query))


medium

The log message "DDL CommitTs < maxCommitTsDDL.CommitTs" is a bit confusing as maxCommitTsDDL is not a variable in this context, and it seems to contain a typo. Consider rephrasing for clarity.

Suggested change
log.Warn("DDL CommitTs < maxCommitTsDDL.CommitTs",
zap.Uint64("commitTs", ddl.CommitTs),
zap.Uint64("maxCommitTs", w.ddlWithMaxCommitTs.CommitTs),
zap.String("DDL", ddl.Query))
log.Warn("DDL commitTs fallback detected, ignore it",
zap.Uint64("commitTs", ddl.CommitTs),
zap.Uint64("maxCommitTs", w.ddlWithMaxCommitTs.CommitTs),
zap.String("DDL", ddl.Query))

Comment on lines +559 to +561
if flushedResolvedTs {
return
}


medium

This for loop can become a busy-wait loop if flushedResolvedTs remains false, consuming unnecessary CPU cycles. To avoid this, consider adding a short sleep duration within the loop.

		if flushedResolvedTs {
			return
		}
		time.Sleep(10 * time.Millisecond)

@codecov

codecov Bot commented Apr 24, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (release-8.1@9a390cc). Learn more about missing BASE report.

Additional details and impacted files
Components   Coverage Δ
cdc          ∅ <0.0000%> (?)
dm           ∅ <0.0000%> (?)
engine       63.4259% <0.0000%> (?)

Flag   Coverage Δ
unit   63.4259% <ø> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

@@               Coverage Diff                @@
##             release-8.1     #12621   +/-   ##
================================================
  Coverage               ?   63.4259%           
================================================
  Files                  ?        166           
  Lines                  ?      14174           
  Branches               ?          0           
================================================
  Hits                   ?       8990           
  Misses                 ?       4618           
  Partials               ?        566           

@ti-chi-bot
Contributor

ti-chi-bot Bot commented Apr 24, 2026

@ti-chi-bot: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name     Commit    Details   Required   Rerun command
pull-verify   eb884c3   link      true       /test pull-verify

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/cherry-pick-not-approved lgtm release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. type/cherry-pick-for-release-8.1 This PR is cherry-picked to release-8.1 from a source PR.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants