
Update k8s.io/client-go to version 0.30.14 #1970

Open

dcoppa wants to merge 2 commits into Altinity:0.27.0 from dcoppa:update_k8s_go_client

Conversation

@dcoppa dcoppa commented Apr 26, 2026

Rationale

I would have liked to add the trafficDistribution spec to the Kubernetes service of a ClickHouse cluster, like this:

apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  defaults:
    templates:
      serviceTemplate: clickhouse-service-template

...

  templates:
    serviceTemplates:
    - name: clickhouse-service-template
      spec:
        ports:
        - name: http
          port: 8123
          protocol: TCP
          targetPort: http
        - name: tcp
          port: 9000 
          protocol: TCP
          targetPort: tcp
        trafficDistribution: PreferSameZone

But with version 0.29.x of k8s.io/client-go, spec.trafficDistribution is silently ignored: the field was only introduced in Kubernetes v1.30, so the 0.29.x API types don't include it and the value is dropped when the service template is parsed.
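
For reference, once the operator is built against client-go v0.30+, the field exists on corev1.ServiceSpec and survives round-trips. A minimal sketch of setting it directly with client-go — the service name, port, and namespace here are illustrative, not taken from this PR:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/ptr"
)

func main() {
	// Build a client from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{Name: "http", Port: 8123}},
			// This field only exists in the v0.30+ API types; with
			// v0.29.x it is absent, so the value is silently dropped.
			TrafficDistribution: ptr.To("PreferSameZone"),
		},
	}

	created, err := clientset.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(*created.Spec.TrafficDistribution)
}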

It works now, after the update:

$ kubectl -n clickhouse-cluster-trial get service clickhouse-cluster01 -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    clickhouse.altinity.com/Service: chi
    clickhouse.altinity.com/app: chop
    clickhouse.altinity.com/chi: cluster01
    clickhouse.altinity.com/namespace: clickhouse-cluster-trial
    clickhouse.altinity.com/object-version: d923085d58e909a531ab544088b8111e8b70e618
  name: clickhouse-cluster01
  namespace: clickhouse-cluster-trial
spec:
  clusterIP: fd74:4d57:66e3::c:7a5e
  clusterIPs:
  - fd74:4d57:66e3::c:7a5e
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 8123
    targetPort: http
  - name: tcp
    port: 9000
    targetPort: tcp
  selector:
    clickhouse.altinity.com/app: chop
    clickhouse.altinity.com/chi: cluster01
    clickhouse.altinity.com/namespace: clickhouse-cluster-trial
    clickhouse.altinity.com/ready: "yes"
  trafficDistribution: PreferSameZone

The change to cmd/operator/app/thread_keeper.go was required because the cache.Options API changed between the versions of sigs.k8s.io/controller-runtime that accompany the client-go upgrade.
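
The PR text doesn't reproduce that diff. Purely as an illustration of the current shape of the API, a cache built with controller-runtime v0.18-era cache.Options looks roughly like the following — the namespace and the whole setup are hypothetical, not the operator's actual code:

package main

import (
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func newCache() (cache.Cache, error) {
	// Load the rest.Config the same way controller-runtime does by default.
	cfg, err := config.GetConfig()
	if err != nil {
		return nil, err
	}
	// Namespace scoping is expressed as a map from namespace to a
	// per-namespace cache.Config, rather than older flat fields.
	return cache.New(cfg, cache.Options{
		DefaultNamespaces: map[string]cache.Config{
			"clickhouse-cluster-trial": {}, // hypothetical namespace
		},
	})
}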

Checklist

  • All commits in the PR are squashed. More info
  • The PR is made into a dedicated next-release branch, not into the master branch. More info
  • The PR is signed. More info

Required for using 'trafficDistribution: PreferSameZone'; the trafficDistribution field was introduced in Kubernetes v1.30.

Signed-off-by: David Coppa <dcoppa@gmail.com>

@sunsingerus sunsingerus left a comment


Thanks for the contribution and the clear motivation — trafficDistribution: PreferSameZone is a useful k8s 1.30 feature and the upgrade is well-scoped.

The library bump itself is fine. The blocker is that clientset auto-generation isn't covered.

What's missing

Bumping k8s.io/code-generator from v0.29.14 to v0.30.14 removes generate-groups.sh — the entry point that dev/run_code_generator.sh invokes:

$ ./dev/run_code_generator.sh
ERROR: generate-groups.sh has been removed.
ERROR: Please use k8s.io/code-generator/kube_codegen.sh instead.

So after this PR lands, anyone who needs to regenerate clientset / listers / informers / deepcopy (e.g. when adding a new API field) has no working path. This needs to be addressed in this PR.

What the fix looks like

  1. Port dev/run_code_generator.sh to the new kube_codegen.sh API. The replacements are kube::codegen::gen_helpers (which covers the deepcopy part of the old generate-groups.sh client,deepcopy,informer,lister invocation) and kube::codegen::gen_client --with-watch --one-input-api clickhouse.altinity.com (clientset + listers + informers). Verified locally — the port runs successfully against the newly vendored vendor/k8s.io/code-generator/kube_codegen.sh. A sketch of such a port follows this list.

  2. Regenerate zz_generated.deepcopy.go and the clientset. The new generator is stricter — where the old one emitted shallow *in = *out for fields of interface types or external structs, the new one emits DeepCopyXxx() method calls and expects the type to implement them. The old generated code was technically buggy (the shallow copies aren't real deep copies); the new behavior is more correct but exposes pre-existing gaps in our deepcopy markers/methods.

  3. Close the deepcopy gaps so the regenerated code compiles. Concretely (verified by running the ported script locally on top of this PR):

type ICustomResource has no field or method DeepCopyICustomResource
type IActionPlan has no field or method DeepCopyIActionPlan
type *messagediff.Diff has no field or method DeepCopyInto

These need either:

  • adding DeepCopyICustomResource() ICustomResource / DeepCopyIActionPlan() IActionPlan to the interfaces and implementing on each concrete type, OR
  • marking the offending fields with // +k8s:deepcopy-gen=false (since these are unexported runtime-only fields like ActionPlan.old/new/specDiff/..., false is probably the right call — they shouldn't be deep-copied anyway).
  4. Add the missing +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object marker on ClickHouseKeeperInstallation. Currently only its List type has it. The old generator emitted DeepCopyObject() for the singular type anyway, but the new (correct) generator only emits it where the marker is present — so without the marker, the type silently stops implementing runtime.Object. A sketch of the marker changes for points 3 and 4 also follows this list.
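
For concreteness, here is a minimal sketch of what the ported dev/run_code_generator.sh could look like. The boilerplate path, output package, and directory layout are guesses, not the repo's actual values:

#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Hypothetical paths; adjust to the actual repo layout.
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
CODEGEN="${REPO_ROOT}/vendor/k8s.io/code-generator"

source "${CODEGEN}/kube_codegen.sh"

# deepcopy helpers (replaces the 'deepcopy' part of generate-groups.sh)
kube::codegen::gen_helpers \
    --boilerplate "${REPO_ROOT}/hack/boilerplate.go.txt" \
    "${REPO_ROOT}/pkg/apis"

# clientset + listers + informers (replaces 'client,informer,lister')
kube::codegen::gen_client \
    --with-watch \
    --one-input-api "clickhouse.altinity.com" \
    --output-dir "${REPO_ROOT}/pkg/client" \
    --output-pkg "github.com/altinity/clickhouse-operator/pkg/client" \
    --boilerplate "${REPO_ROOT}/hack/boilerplate.go.txt" \
    "${REPO_ROOT}/pkg/apis"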
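
And a sketch of the marker changes from points 3 and 4. The type and field names come from the errors above, but the file layout, import path, and surrounding fields are assumptions:

package v1

import (
	messagediff "gopkg.in/d4l3k/messagediff.v1" // import path assumed
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Stub for the sketch; the real interface lives elsewhere in the repo.
type ICustomResource interface{}

// Point 3: exclude unexported runtime-only state from deepcopy generation.
// The old generator shallow-copied these fields (*out = *in); the new one
// would otherwise demand DeepCopy methods on the interface/external types.
type ActionPlan struct {
	// +k8s:deepcopy-gen=false
	old ICustomResource
	// +k8s:deepcopy-gen=false
	new ICustomResource
	// +k8s:deepcopy-gen=false
	specDiff *messagediff.Diff
}

// Point 4: the singular type needs its own runtime.Object marker; the new
// generator no longer emits DeepCopyObject() without it.
//
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type ClickHouseKeeperInstallation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// ... spec and status fields elided ...
}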

Verified locally

I merged the PR into a sandbox branch and ran through the above; the lib bump itself + your cmd/operator/app/thread_keeper.go adapter compile and unit-test cleanly. The codegen breakage only surfaces when someone tries to regenerate.

Suggestion

Either fold the codegen porting + deepcopy cleanup into this PR (preferred — keeps the upgrade self-contained), or split it into a follow-up that lands at the same time. Happy to assist with the ported dev/run_code_generator.sh if that helps.

@sunsingerus sunsingerus added the waiting reply label Apr 27, 2026
…ps for code-generator v0.30

Signed-off-by: David Coppa <dcoppa@gmail.com>

dcoppa commented Apr 28, 2026

./dev/run_code_generator.sh now works properly. I hope I did everything correctly.

@sunsingerus sunsingerus added the ongoing discussion and planned for review labels and removed the waiting reply label Apr 28, 2026