Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container prepare
2026/05/15 13:53:48 Entrypoint initialization
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container place-scripts
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-0-zzv2s
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-1-pphvs
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-2-zzqn6
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-3-hwww7
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-4-xxqtv
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-5-rnts2
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-7-s6tqc
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-8-zlvl7
2026/05/15 13:53:52 Decoded script /tekton/scripts/script-9-k2n2h
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container working-dir-initializer
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-git-clone
git cloning url: https://github.com/greencapitaltrade/thor.git version refs/pull/512/head:PR-512@25e6432eea99c2d790bd5fd4cc2832249314ee0a to dir: source
Cloning into 'source'...
From https://github.com/greencapitaltrade/thor
 * [new ref] refs/pull/512/head -> PR-512
Switched to branch 'PR-512'
HEAD is now at 25e6432 fix: drop dead build-pool affinity, prefer stateless
checked out revision: refs/pull/512/head:PR-512@25e6432eea99c2d790bd5fd4cc2832249314ee0a to dir: source
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-git-merge
setup git user email jenkins-x@googlegroups.com
Using SHAs from PULL_REFS=main:4667798a3509eb03fdf9ee0fba8fb1ca46e957a4,512:25e6432eea99c2d790bd5fd4cc2832249314ee0a:refs/pull/512/head
using base branch main and base sha 4667798a3509eb03fdf9ee0fba8fb1ca46e957a4
merging sha: 25e6432eea99c2d790bd5fd4cc2832249314ee0a
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-jx-variables
WARNING: no $GIT_SECRET_MOUNT_PATH environment variable set
about to run: git clone --no-checkout --filter=blob:none --depth=1 https://github.com/greencapitaltrade/mcu.git /tmp/jx-git-2479727675 in dir /tmp
Cloning into '/tmp/jx-git-2479727675'...
about to run: git sparse-checkout set --no-cone jx-requirements.yml .jx/gitops/source-config.yaml in dir /tmp/jx-git-2479727675
about to run: git checkout in dir /tmp/jx-git-2479727675
Your branch is up to date with 'origin/main'.
export APP_NAME='thor'
export BRANCH_NAME='PR-512'
export BUILD_NUMBER='11'
export DOCKERFILE_PATH='Dockerfile'
export DOCKER_REGISTRY='417755753340.dkr.ecr.ap-south-1.amazonaws.com'
export DOCKER_REGISTRY_ORG='greencapitaltrade'
export DOMAIN='gc-t.in'
about to run: git rev-parse --abbrev-ref HEAD in dir .
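The step-git-merge output above is driven entirely by the PULL_REFS variable. As a minimal sketch (pure string handling, no git calls; the field layout `<base-branch>:<base-sha>,<pr-number>:<pr-sha>:<pr-ref>` is inferred from the log line above, not from jx source), the base branch, base SHA, and merge SHA can be split out like this:

```shell
# PULL_REFS as logged by this build
PULL_REFS='main:4667798a3509eb03fdf9ee0fba8fb1ca46e957a4,512:25e6432eea99c2d790bd5fd4cc2832249314ee0a:refs/pull/512/head'

base="${PULL_REFS%%,*}"       # everything before the first comma: main:4667798...
base_branch="${base%%:*}"     # branch the PR is merged onto
base_sha="${base#*:}"         # sha the base branch is pinned to
pr="${PULL_REFS#*,}"          # 512:<sha>:refs/pull/512/head
pr_sha="$(printf '%s' "$pr" | cut -d: -f2)"   # sha that gets merged

echo "merging $pr_sha onto $base_branch at $base_sha"
```

This reproduces the "using base branch ... and base sha ..." and "merging sha: ..." lines in the log.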
main
export GIT_BRANCH='main'
export JENKINS_X_URL='https://dashboard-jx.gc-t.in/greencapitaltrade/thor/PR-512/11'
export JX_CHART_REPOSITORY='http://jenkins-x-chartmuseum.jx.svc.cluster.local:8080'
export KANIKO_FLAGS='--skip-unused-stages --snapshotMode=redo'
export MINK_AS='tekton-bot'
version file VERSION does not exist
export MINK_IMAGE='417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT'
export NAMESPACE_SUB_DOMAIN='-jx.'
export PIPELINE_KIND='pullrequest'
export PUSH_CONTAINER_REGISTRY='417755753340.dkr.ecr.ap-south-1.amazonaws.com'
export REPO_NAME='thor'
export REPO_OWNER='greencapitaltrade'
version file VERSION does not exist
export VERSION='0.0.0-PR-512-11-SNAPSHOT'
added variables to file: .jx/variables.sh
added variables to file: .jx/variables.sh
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-build-npm-install
=== NPM Config Debug ===
NPM_CONFIG_USERCONFIG: /tekton/home/npm/.npmrc
total 0
drwxrwxrwt. 3 root root 100 May 15 13:53 .
drwxrwxrwx. 4 root root 74 May 15 13:54 ..
drwxr-xr-x. 2 root root 60 May 15 13:53 ..2026_05_15_13_53_42.1312418655
lrwxrwxrwx. 1 root root 32 May 15 13:53 ..data -> ..2026_05_15_13_53_42.1312418655
lrwxrwxrwx. 1 root root 13 May 15 13:53 .npmrc -> ..data/.npmrc
registry=http://nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/
//nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/:_auth=[REDACTED]
@greencapitaltrade:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=[REDACTED]
=== npm config list ===
; "user" config from /tekton/home/npm/.npmrc
@greencapitaltrade:registry = "https://npm.pkg.github.com/"
//nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/:_auth = (protected)
//npm.pkg.github.com/:_authToken = (protected)
registry = "http://nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/"
; "env" config from environment
userconfig = "/tekton/home/npm/.npmrc"
; node bin location = /usr/local/bin/node
; node version = v25.9.0
; npm local prefix = /workspace/source
; npm version = 11.12.1
; cwd = /workspace/source
; HOME = /tekton/home
; Run `npm config ls -l` to show all defaults.
=== npm ci ===
npm warn deprecated supertest@6.3.4: Please upgrade to supertest v7.1.3+, see release notes at https://github.com/forwardemail/supertest/releases/tag/v7.1.3 - maintenance is supported by Forward Email @ https://forwardemail.net
npm warn deprecated rimraf@3.0.2: Rimraf versions prior to v4 are no longer supported
npm warn deprecated keygrip@1.1.0: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
npm warn deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
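The mounted .npmrc shown above mixes a cluster-local Nexus mirror (the default registry) with a scoped registry for @greencapitaltrade packages on GitHub Packages. A sanitized sketch of that layout, with the raw credentials swapped for environment-variable references (the `NEXUS_AUTH` and `GITHUB_TOKEN` names are illustrative, not taken from this build):

```ini
; sanitized sketch of /tekton/home/npm/.npmrc
registry=http://nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/
//nexus-nexus-repository-manager.jx.svc.cluster.local:8081/repository/npm-group/:_auth=${NEXUS_AUTH}
@greencapitaltrade:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```

npm expands `${VAR}` references in .npmrc at runtime, which keeps secrets out of the file itself; note that the version actually mounted here carried literal credentials, which is why they are redacted in the log above.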
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm warn deprecated @humanwhocodes/config-array@0.13.0: Use @eslint/config-array instead
npm warn deprecated @humanwhocodes/object-schema@2.0.3: Use @eslint/object-schema instead
npm warn deprecated superagent@8.1.2: Please upgrade to superagent v10.2.2+, see release notes at https://github.com/forwardemail/superagent/releases/tag/v10.2.2 - maintenance is supported by Forward Email @ https://forwardemail.net
npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead
npm warn deprecated moize@5.4.5: This library has been deprecated in favor of micro-memoize, which as-of version 5 incorporates most of the functionality that this library offers at nearly half the size and better speed.
npm warn deprecated eslint@8.57.1: This version is no longer supported. Please see https://eslint.org/version-support for other options.

> thor@0.0.1 prepare
> husky install

husky - git command not found, skipping install

added 771 packages in 24s

113 packages are looking for funding
  run `npm fund` for details
Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-build-run-linting
Get:1 http://deb.debian.org/debian trixie InRelease [140 kB]
Get:2 http://deb.debian.org/debian-security trixie-security InRelease [43.4 kB]
Get:3 http://deb.debian.org/debian trixie-updates InRelease [47.3 kB]
Get:4 http://deb.debian.org/debian bookworm InRelease [151 kB]
Get:5 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
Get:6 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
Get:7 http://deb.debian.org/debian trixie/main amd64 Packages [9671 kB]
Get:8 http://deb.debian.org/debian-security trixie-security/main amd64 Packages [161 kB]
Get:9 http://deb.debian.org/debian trixie-updates/main amd64 Packages [5412 B]
Get:10 http://deb.debian.org/debian bookworm/main amd64 Packages [8792 kB]
Get:11 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [6924 B]
Get:12 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [305 kB]
Fetched 19.4 MB in 6s (3282 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  base-files git-man libbrotli1 libc-bin libc6 libcurl3t64-gnutls libdb5.3t64 liberror-perl libexpat1 libgdbm-compat4t64 libgdbm6t64 libgmp10 libgnutls30t64 libgssapi-krb5-2 libhogweed6t64 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap2 libnettle8t64 libnghttp2-14 libnghttp3-9 libngtcp2-16 libngtcp2-crypto-gnutls8 libp11-kit0 libperl5.40 libpsl5t64 librtmp1 libsasl2-2 libsasl2-modules-db libssh2-1t64 libssl3t64 libunistring5 libzstd1 openssl-provider-legacy perl perl-base perl-modules-5.40
Suggested packages:
  gettext-base git-doc git-email git-gui gitk gitweb git-cvs git-mediawiki git-svn glibc-doc libc-l10n locales libnss-nis libnss-nisplus gdbm-l10n gnutls-bin krb5-doc krb5-user sensible-utils perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make libtap-harness-archive-perl
Recommended packages:
  ca-certificates patch less ssh-client manpages krb5-locales libldap-common publicsuffix libsasl2-modules netbase
The following packages will be REMOVED:
  libdb5.3 libgnutls30 libhogweed6 libnettle8
The following NEW packages will be installed:
  git git-man libbrotli1 libcurl3t64-gnutls libdb5.3t64 liberror-perl libexpat1 libgdbm-compat4t64 libgdbm6t64 libgnutls30t64 libgssapi-krb5-2 libhogweed6t64 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libldap2 libnettle8t64 libnghttp2-14 libnghttp3-9 libngtcp2-16 libngtcp2-crypto-gnutls8 libperl5.40 libpsl5t64 librtmp1 libsasl2-2 libsasl2-modules-db libssh2-1t64 libssl3t64 libunistring5 openssl-provider-legacy perl perl-modules-5.40
The following packages will be upgraded:
  base-files libc-bin libc6 libgmp10 libp11-kit0 libzstd1 perl-base
7 upgraded, 33 newly installed, 4 to remove and 75 not upgraded.
Need to get 33.7 MB of archives.
After this operation, 123 MB of additional disk space will be used.
Get:1 http://deb.debian.org/debian trixie/main amd64 base-files amd64 13.8+deb13u4 [73.3 kB]
Get:2 http://deb.debian.org/debian trixie/main amd64 libc6 amd64 2.41-12+deb13u2 [2847 kB]
Get:3 http://deb.debian.org/debian trixie/main amd64 libc-bin amd64 2.41-12+deb13u2 [638 kB]
Get:4 http://deb.debian.org/debian trixie/main amd64 perl-base amd64 5.40.1-6 [1666 kB]
Get:5 http://deb.debian.org/debian trixie/main amd64 libdb5.3t64 amd64 5.3.28+dfsg2-9 [704 kB]
Get:6 http://deb.debian.org/debian trixie/main amd64 libgmp10 amd64 2:6.3.0+dfsg-3 [560 kB]
Get:7 http://deb.debian.org/debian trixie/main amd64 libnettle8t64 amd64 3.10.1-1 [305 kB]
Get:8 http://deb.debian.org/debian trixie/main amd64 libhogweed6t64 amd64 3.10.1-1 [335 kB]
Get:9 http://deb.debian.org/debian trixie/main amd64 libp11-kit0 amd64 0.25.5-3 [425 kB]
Get:10 http://deb.debian.org/debian trixie/main amd64 libunistring5 amd64 1.3-2 [477 kB]
Get:11 http://deb.debian.org/debian trixie/main amd64 libgnutls30t64 amd64 3.8.9-3+deb13u2 [1468 kB]
Get:12 http://deb.debian.org/debian trixie/main amd64 libexpat1 amd64 2.7.1-2 [108 kB]
Get:13 http://deb.debian.org/debian trixie/main amd64 libzstd1 amd64 1.5.7+dfsg-1 [304 kB]
Get:14 http://deb.debian.org/debian trixie/main amd64 perl-modules-5.40 all 5.40.1-6 [3019 kB]
Get:15 http://deb.debian.org/debian trixie/main amd64 libgdbm6t64 amd64 1.24-2 [75.2 kB]
Get:16 http://deb.debian.org/debian trixie/main amd64 libgdbm-compat4t64 amd64 1.24-2 [50.3 kB]
Get:17 http://deb.debian.org/debian trixie/main amd64 libperl5.40 amd64 5.40.1-6 [4341 kB]
Get:18 http://deb.debian.org/debian trixie/main amd64 perl amd64 5.40.1-6 [267 kB]
Get:19 http://deb.debian.org/debian trixie/main amd64 libbrotli1 amd64 1.1.0-2+b7 [307 kB]
Get:20 http://deb.debian.org/debian trixie/main amd64 libkrb5support0 amd64 1.21.3-5 [33.0 kB]
Get:21 http://deb.debian.org/debian trixie/main amd64 libk5crypto3 amd64 1.21.3-5 [81.5 kB]
Get:22 http://deb.debian.org/debian trixie/main amd64 libkeyutils1 amd64 1.6.3-6 [9456 B]
Get:23 http://deb.debian.org/debian-security trixie-security/main amd64 openssl-provider-legacy amd64 3.5.5-1~deb13u2 [311 kB]
Get:24 http://deb.debian.org/debian-security trixie-security/main amd64 libssl3t64 amd64 3.5.5-1~deb13u2 [2445 kB]
Get:25 http://deb.debian.org/debian trixie/main amd64 libkrb5-3 amd64 1.21.3-5 [326 kB]
Get:26 http://deb.debian.org/debian trixie/main amd64 libgssapi-krb5-2 amd64 1.21.3-5 [138 kB]
Get:27 http://deb.debian.org/debian trixie/main amd64 libsasl2-modules-db amd64 2.1.28+dfsg1-9 [19.8 kB]
Get:28 http://deb.debian.org/debian trixie/main amd64 libsasl2-2 amd64 2.1.28+dfsg1-9 [57.5 kB]
Get:29 http://deb.debian.org/debian trixie/main amd64 libldap2 amd64 2.6.10+dfsg-1 [194 kB]
Get:30 http://deb.debian.org/debian-security trixie-security/main amd64 libnghttp2-14 amd64 1.64.0-1.1+deb13u1 [76.2 kB]
Get:31 http://deb.debian.org/debian trixie/main amd64 libnghttp3-9 amd64 1.8.0-1 [67.7 kB]
Get:32 http://deb.debian.org/debian-security trixie-security/main amd64 libngtcp2-16 amd64 1.11.0-1+deb13u1 [132 kB]
Get:33 http://deb.debian.org/debian-security trixie-security/main amd64 libngtcp2-crypto-gnutls8 amd64 1.11.0-1+deb13u1 [29.5 kB]
Get:34 http://deb.debian.org/debian trixie/main amd64 libpsl5t64 amd64 0.21.2-1.1+b1 [57.2 kB]
Get:35 http://deb.debian.org/debian trixie/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d.1-2+b5 [58.8 kB]
Get:36 http://deb.debian.org/debian trixie/main amd64 libssh2-1t64 amd64 1.11.1-1 [245 kB]
Get:37 http://deb.debian.org/debian trixie/main amd64 libcurl3t64-gnutls amd64 8.14.1-2+deb13u2 [383 kB]
Get:38 http://deb.debian.org/debian trixie/main amd64 liberror-perl all 0.17030-1 [26.9 kB]
Get:39 http://deb.debian.org/debian trixie/main amd64 git-man all 1:2.47.3-0+deb13u1 [2205 kB]
Get:40 http://deb.debian.org/debian trixie/main amd64 git amd64 1:2.47.3-0+deb13u1 [8862 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 33.7 MB in 0s (93.7 MB/s)
(Reading database ... 6099 files and directories currently installed.)
Preparing to unpack .../base-files_13.8+deb13u4_amd64.deb ...
Unpacking base-files (13.8+deb13u4) over (12.4+deb12u13) ...
Setting up base-files (13.8+deb13u4) ...
Installing new version of config file /etc/debian_version ...
Installing new version of config file /etc/issue ...
Installing new version of config file /etc/issue.net ...
Updating /etc/profile to current default.
Updating /root/.profile to current default.
(Reading database ... 6110 files and directories currently installed.)
Preparing to unpack .../libc6_2.41-12+deb13u2_amd64.deb ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Checking for services that may need to be restarted...
Checking init scripts...
Unpacking libc6:amd64 (2.41-12+deb13u2) over (2.36-9+deb12u13) ...
Setting up libc6:amd64 (2.41-12+deb13u2) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.36.0 /usr/local/share/perl/5.36.0 /usr/lib/x86_64-linux-gnu/perl5/5.36 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.36 /usr/share/perl/5.36 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Checking for services that may need to be restarted...
Checking init scripts...
Nothing to restart.
(Reading database ... 6112 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.41-12+deb13u2_amd64.deb ...
Unpacking libc-bin (2.41-12+deb13u2) over (2.36-9+deb12u13) ...
Setting up libc-bin (2.41-12+deb13u2) ...
(Reading database ... 6112 files and directories currently installed.)
Preparing to unpack .../perl-base_5.40.1-6_amd64.deb ...
Unpacking perl-base (5.40.1-6) over (5.36.0-7+deb12u3) ...
Setting up perl-base (5.40.1-6) ...
dpkg: libdb5.3:amd64: dependency problems, but removing anyway as you requested:
 libpam-modules:amd64 depends on libdb5.3.
(Reading database ... 6117 files and directories currently installed.)
Removing libdb5.3:amd64 (5.3.28+dfsg2-1) ...
Selecting previously unselected package libdb5.3t64:amd64.
(Reading database ... 6111 files and directories currently installed.)
Preparing to unpack .../libdb5.3t64_5.3.28+dfsg2-9_amd64.deb ...
Unpacking libdb5.3t64:amd64 (5.3.28+dfsg2-9) ...
Setting up libdb5.3t64:amd64 (5.3.28+dfsg2-9) ...
(Reading database ... 6117 files and directories currently installed.)
Preparing to unpack .../libgmp10_2%3a6.3.0+dfsg-3_amd64.deb ...
Unpacking libgmp10:amd64 (2:6.3.0+dfsg-3) over (2:6.2.1+dfsg1-1.1) ...
Setting up libgmp10:amd64 (2:6.3.0+dfsg-3) ...
dpkg: libnettle8:amd64: dependency problems, but removing anyway as you requested:
 libhogweed6:amd64 depends on libnettle8.
 libgnutls30:amd64 depends on libnettle8 (>= 3.7~).
(Reading database ... 6117 files and directories currently installed.)
Removing libnettle8:amd64 (3.8.1-2) ...
Selecting previously unselected package libnettle8t64:amd64.
(Reading database ... 6109 files and directories currently installed.)
Preparing to unpack .../libnettle8t64_3.10.1-1_amd64.deb ...
Unpacking libnettle8t64:amd64 (3.10.1-1) ...
dpkg: libhogweed6:amd64: dependency problems, but removing anyway as you requested:
 libgnutls30:amd64 depends on libhogweed6 (>= 3.6).
(Reading database ... 6118 files and directories currently installed.)
Removing libhogweed6:amd64 (3.8.1-2) ...
Selecting previously unselected package libhogweed6t64:amd64.
(Reading database ... 6112 files and directories currently installed.)
Preparing to unpack .../libhogweed6t64_3.10.1-1_amd64.deb ...
Unpacking libhogweed6t64:amd64 (3.10.1-1) ...
Preparing to unpack .../libp11-kit0_0.25.5-3_amd64.deb ...
Unpacking libp11-kit0:amd64 (0.25.5-3) over (0.24.1-2) ...
Selecting previously unselected package libunistring5:amd64.
Preparing to unpack .../libunistring5_1.3-2_amd64.deb ...
Unpacking libunistring5:amd64 (1.3-2) ...
dpkg: libgnutls30:amd64: dependency problems, but removing anyway as you requested:
 apt depends on libgnutls30 (>= 3.7.5).
(Reading database ... 6125 files and directories currently installed.)
Removing libgnutls30:amd64 (3.7.9-2+deb12u6) ...
Selecting previously unselected package libgnutls30t64:amd64.
(Reading database ... 6096 files and directories currently installed.)
Preparing to unpack .../libgnutls30t64_3.8.9-3+deb13u2_amd64.deb ...
Unpacking libgnutls30t64:amd64 (3.8.9-3+deb13u2) ...
Setting up libnettle8t64:amd64 (3.10.1-1) ...
Setting up libhogweed6t64:amd64 (3.10.1-1) ...
Setting up libp11-kit0:amd64 (0.25.5-3) ...
Setting up libunistring5:amd64 (1.3-2) ...
Setting up libgnutls30t64:amd64 (3.8.9-3+deb13u2) ...
Selecting previously unselected package libexpat1:amd64.
(Reading database ... 6125 files and directories currently installed.)
Preparing to unpack .../libexpat1_2.7.1-2_amd64.deb ...
Unpacking libexpat1:amd64 (2.7.1-2) ...
Preparing to unpack .../libzstd1_1.5.7+dfsg-1_amd64.deb ...
Unpacking libzstd1:amd64 (1.5.7+dfsg-1) over (1.5.4+dfsg2-5) ...
Setting up libzstd1:amd64 (1.5.7+dfsg-1) ...
Selecting previously unselected package perl-modules-5.40.
(Reading database ... 6134 files and directories currently installed.)
Preparing to unpack .../00-perl-modules-5.40_5.40.1-6_all.deb ...
Unpacking perl-modules-5.40 (5.40.1-6) ...
Selecting previously unselected package libgdbm6t64:amd64.
Preparing to unpack .../01-libgdbm6t64_1.24-2_amd64.deb ...
Unpacking libgdbm6t64:amd64 (1.24-2) ...
Selecting previously unselected package libgdbm-compat4t64:amd64.
Preparing to unpack .../02-libgdbm-compat4t64_1.24-2_amd64.deb ...
Unpacking libgdbm-compat4t64:amd64 (1.24-2) ...
Selecting previously unselected package libperl5.40:amd64.
Preparing to unpack .../03-libperl5.40_5.40.1-6_amd64.deb ...
Unpacking libperl5.40:amd64 (5.40.1-6) ...
Selecting previously unselected package perl.
Preparing to unpack .../04-perl_5.40.1-6_amd64.deb ...
Unpacking perl (5.40.1-6) ...
Selecting previously unselected package libbrotli1:amd64.
Preparing to unpack .../05-libbrotli1_1.1.0-2+b7_amd64.deb ...
Unpacking libbrotli1:amd64 (1.1.0-2+b7) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../06-libkrb5support0_1.21.3-5_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.21.3-5) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../07-libk5crypto3_1.21.3-5_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.21.3-5) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../08-libkeyutils1_1.6.3-6_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.6.3-6) ...
Selecting previously unselected package openssl-provider-legacy.
Preparing to unpack .../09-openssl-provider-legacy_3.5.5-1~deb13u2_amd64.deb ...
Unpacking openssl-provider-legacy (3.5.5-1~deb13u2) ...
Selecting previously unselected package libssl3t64:amd64.
Preparing to unpack .../10-libssl3t64_3.5.5-1~deb13u2_amd64.deb ...
Unpacking libssl3t64:amd64 (3.5.5-1~deb13u2) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../11-libkrb5-3_1.21.3-5_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.21.3-5) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../12-libgssapi-krb5-2_1.21.3-5_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.21.3-5) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../13-libsasl2-modules-db_2.1.28+dfsg1-9_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.28+dfsg1-9) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../14-libsasl2-2_2.1.28+dfsg1-9_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.28+dfsg1-9) ...
Selecting previously unselected package libldap2:amd64.
Preparing to unpack .../15-libldap2_2.6.10+dfsg-1_amd64.deb ...
Unpacking libldap2:amd64 (2.6.10+dfsg-1) ...
Selecting previously unselected package libnghttp2-14:amd64.
Preparing to unpack .../16-libnghttp2-14_1.64.0-1.1+deb13u1_amd64.deb ...
Unpacking libnghttp2-14:amd64 (1.64.0-1.1+deb13u1) ...
Selecting previously unselected package libnghttp3-9:amd64.
Preparing to unpack .../17-libnghttp3-9_1.8.0-1_amd64.deb ...
Unpacking libnghttp3-9:amd64 (1.8.0-1) ...
Selecting previously unselected package libngtcp2-16:amd64.
Preparing to unpack .../18-libngtcp2-16_1.11.0-1+deb13u1_amd64.deb ...
Unpacking libngtcp2-16:amd64 (1.11.0-1+deb13u1) ...
Selecting previously unselected package libngtcp2-crypto-gnutls8:amd64.
Preparing to unpack .../19-libngtcp2-crypto-gnutls8_1.11.0-1+deb13u1_amd64.deb ...
Unpacking libngtcp2-crypto-gnutls8:amd64 (1.11.0-1+deb13u1) ...
Selecting previously unselected package libpsl5t64:amd64.
Preparing to unpack .../20-libpsl5t64_0.21.2-1.1+b1_amd64.deb ...
Unpacking libpsl5t64:amd64 (0.21.2-1.1+b1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../21-librtmp1_2.4+20151223.gitfa8646d.1-2+b5_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2+b5) ...
Selecting previously unselected package libssh2-1t64:amd64.
Preparing to unpack .../22-libssh2-1t64_1.11.1-1_amd64.deb ...
Unpacking libssh2-1t64:amd64 (1.11.1-1) ...
Selecting previously unselected package libcurl3t64-gnutls:amd64.
Preparing to unpack .../23-libcurl3t64-gnutls_8.14.1-2+deb13u2_amd64.deb ...
Unpacking libcurl3t64-gnutls:amd64 (8.14.1-2+deb13u2) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../24-liberror-perl_0.17030-1_all.deb ...
Unpacking liberror-perl (0.17030-1) ...
Selecting previously unselected package git-man.
Preparing to unpack .../25-git-man_1%3a2.47.3-0+deb13u1_all.deb ...
Unpacking git-man (1:2.47.3-0+deb13u1) ...
Selecting previously unselected package git.
Preparing to unpack .../26-git_1%3a2.47.3-0+deb13u1_amd64.deb ...
Unpacking git (1:2.47.3-0+deb13u1) ...
Setting up libexpat1:amd64 (2.7.1-2) ...
Setting up libkeyutils1:amd64 (1.6.3-6) ...
Setting up libgdbm6t64:amd64 (1.24-2) ...
Setting up libgdbm-compat4t64:amd64 (1.24-2) ...
Setting up libbrotli1:amd64 (1.1.0-2+b7) ...
Setting up libpsl5t64:amd64 (0.21.2-1.1+b1) ...
Setting up libnghttp2-14:amd64 (1.64.0-1.1+deb13u1) ...
Setting up libkrb5support0:amd64 (1.21.3-5) ...
Setting up libsasl2-modules-db:amd64 (2.1.28+dfsg1-9) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2+b5) ...
Setting up libk5crypto3:amd64 (1.21.3-5) ...
Setting up libnghttp3-9:amd64 (1.8.0-1) ...
Setting up perl-modules-5.40 (5.40.1-6) ...
Setting up git-man (1:2.47.3-0+deb13u1) ...
Setting up libngtcp2-16:amd64 (1.11.0-1+deb13u1) ...
Setting up libngtcp2-crypto-gnutls8:amd64 (1.11.0-1+deb13u1) ...
Setting up libperl5.40:amd64 (5.40.1-6) ... Setting up perl (5.40.1-6) ... Setting up liberror-perl (0.17030-1) ... Setting up openssl-provider-legacy (3.5.5-1~deb13u2) ... Setting up libssl3t64:amd64 (3.5.5-1~deb13u2) ... Setting up libsasl2-2:amd64 (2.1.28+dfsg1-9) ... Setting up libkrb5-3:amd64 (1.21.3-5) ... Setting up libssh2-1t64:amd64 (1.11.1-1) ... Setting up libldap2:amd64 (2.6.10+dfsg-1) ... Setting up libgssapi-krb5-2:amd64 (1.21.3-5) ... Setting up libcurl3t64-gnutls:amd64 (8.14.1-2+deb13u2) ... Setting up git (1:2.47.3-0+deb13u1) ... Processing triggers for libc-bin (2.41-12+deb13u2) ... Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-build-lint-errors Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-check-registry sourcing .jx/variables.sh verifying the container registry is setup WARNING: no $GIT_SECRET_MOUNT_PATH environment variable set about to run: git clone --no-checkout --filter=blob:none --depth=1 https://github.com/greencapitaltrade/mcu.git /tmp/jx-git-3052813956 in dir /tmp Cloning into '/tmp/jx-git-3052813956'... about to run: git sparse-checkout set --no-cone jx-requirements.yml .jx/gitops/source-config.yaml in dir /tmp/jx-git-3052813956 about to run: git checkout in dir /tmp/jx-git-3052813956 Your branch is up to date with 'origin/main'. verifying that container registry 417755753340.dkr.ecr.ap-south-1.amazonaws.com with organisation greencapitaltrade and app name thor has an ECR associated with it loading config with AWS region: 'ap-south-1' Let's ensure that we have an ECR repository for the image greencapitaltrade/thor Found repository: greencapitaltrade/thor Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-build-container-build time="2026-05-15T13:56:04Z" level=warning msg="Flag --snapshotMode is deprecated. 
Use: --snapshot-mode" INFO[0000] Using dockerignore file: /workspace/source/.dockerignore INFO[0000] Retrieving image manifest node:25-slim INFO[0000] Retrieving image node:25-slim from registry index.docker.io INFO[0002] Built cross stage deps: map[] INFO[0002] Retrieving image manifest node:25-slim INFO[0002] Returning cached image manifest INFO[0002] Executing 0 build triggers INFO[0002] Building stage 'node:25-slim' [idx: '0', base-idx: '-1'] INFO[0002] Unpacking rootfs as cmd COPY . . requires it. INFO[0010] COPY . . INFO[0027] Taking snapshot of files... INFO[0039] CMD ["sh", "-c", "(sequelize db:create || true) && sequelize db:migrate && exec node index.js"] INFO[0039] Pushing image to 417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT INFO[0041] Pushed 417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor@sha256:5cda6c15bdb4de4972d8665f300adf781deb83bf91267706f09ad3d8645d997b Showing logs for build greencapitaltrade/thor/pr-512 #11 stage from-build-pack and container step-promote-jx-preview found PullRequest https://github.com/greencapitaltrade/thor/pull/512 using ingress sub domain -jx-greencapitaltrade-thor-pr-512. 
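The kaniko step above logs only a COPY and a CMD (no RUN steps), which suggests a Dockerfile roughly like the following. This is a hypothetical reconstruction: only the node:25-slim base image, the `COPY . .`, and the CMD string are confirmed by the log; anything else (such as a WORKDIR) may differ in the actual repository.

```dockerfile
# Hypothetical reconstruction of the Dockerfile implied by the kaniko log.
# Confirmed by the log: base image, a single COPY of the build context, and the CMD.
FROM node:25-slim
# node_modules appear to be shipped in the build context, since no RUN npm install is logged.
COPY . .
# Create the database if missing (ignoring failure if it already exists),
# run migrations, then exec the app so node becomes PID 1.
CMD ["sh", "-c", "(sequelize db:create || true) && sequelize db:migrate && exec node index.js"]
```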
upserted preview jx-greencapitaltrade-thor-pr-512 passing env vars into helmfile: map[string]string{"APP_NAME":"thor", "DOCKER_REGISTRY":"417755753340.dkr.ecr.ap-south-1.amazonaws.com", "DOCKER_REGISTRY_ORG":"greencapitaltrade", "PREVIEW_NAMESPACE":"jx-greencapitaltrade-thor-pr-512", "VERSION":"0.0.0-PR-512-11-SNAPSHOT"} jx-greencapitaltrade-thor-pr-512: LAST SEEN TYPE REASON OBJECT MESSAGE jx-greencapitaltrade-thor-pr-512: 25m Normal Killing pod/kafka-cluster-broker-0 Stopping container kafka jx-greencapitaltrade-thor-pr-512: 25m Normal Scheduled pod/kafka-cluster-broker-0 Successfully assigned jx-greencapitaltrade-thor-pr-512/kafka-cluster-broker-0 to ip-10-0-62-105.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 25m Normal Pulling pod/kafka-cluster-broker-0 Pulling image "quay.io/strimzi/kafka:0.48.0-kafka-4.1.0" jx-greencapitaltrade-thor-pr-512: 24m Normal Pulled pod/kafka-cluster-broker-0 Successfully pulled image "quay.io/strimzi/kafka:0.48.0-kafka-4.1.0" in 1m14.44s (1m14.44s including waiting). Image size: 402118975 bytes. jx-greencapitaltrade-thor-pr-512: 20m Normal Created pod/kafka-cluster-broker-0 Created container: kafka jx-greencapitaltrade-thor-pr-512: 20m Normal Started pod/kafka-cluster-broker-0 Started container kafka jx-greencapitaltrade-thor-pr-512: 19m Normal Pulled pod/kafka-cluster-broker-0 Container image "quay.io/strimzi/kafka:0.48.0-kafka-4.1.0" already present on machine jx-greencapitaltrade-thor-pr-512: 22m Warning Unhealthy pod/kafka-cluster-broker-0 Readiness probe failed: % Total % Received % Xferd Average Speed Time Time Time Current... 
jx-greencapitaltrade-thor-pr-512: 14s Warning BackOff pod/kafka-cluster-broker-0 Back-off restarting failed container kafka in pod kafka-cluster-broker-0_jx-greencapitaltrade-thor-pr-512(932ab3ee-c316-49de-b8a2-09e7568b29e3) jx-greencapitaltrade-thor-pr-512: 21m Warning Unhealthy pod/kafka-cluster-broker-0 Readiness probe failed: % Total % Received % Xferd Average Speed Time Time Time Current... jx-greencapitaltrade-thor-pr-512: 20m Warning Unhealthy pod/kafka-cluster-broker-0 Readiness probe failed: % Total % Received % Xferd Average Speed Time Time Time Current... jx-greencapitaltrade-thor-pr-512: 23m Normal Killing pod/kafka-cluster-controller-1 Stopping container kafka jx-greencapitaltrade-thor-pr-512: 23m Normal Scheduled pod/kafka-cluster-controller-1 Successfully assigned jx-greencapitaltrade-thor-pr-512/kafka-cluster-controller-1 to ip-10-0-74-255.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 23m Normal Pulling pod/kafka-cluster-controller-1 Pulling image "quay.io/strimzi/kafka:0.48.0-kafka-4.1.0" jx-greencapitaltrade-thor-pr-512: 22m Normal Pulled pod/kafka-cluster-controller-1 Successfully pulled image "quay.io/strimzi/kafka:0.48.0-kafka-4.1.0" in 31.332s (31.332s including waiting). Image size: 402118975 bytes. 
jx-greencapitaltrade-thor-pr-512: 22m Normal Created pod/kafka-cluster-controller-1 Created container: kafka jx-greencapitaltrade-thor-pr-512: 22m Normal Started pod/kafka-cluster-controller-1 Started container kafka jx-greencapitaltrade-thor-pr-512: 2m4s Normal NotTriggerScaleUp pod/postgresql-0 pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector, 1 max node group size reached jx-greencapitaltrade-thor-pr-512: 7m6s Normal NotTriggerScaleUp pod/postgresql-0 pod didn't trigger scale-up: 1 max node group size reached, 1 node(s) didn't match Pod's node affinity/selector jx-greencapitaltrade-thor-pr-512: 32m Warning FailedScheduling pod/postgresql-0 0/7 nodes are available: 7 node(s) didn't match Pod's node affinity/selector. preemption: 0/7 nodes are available: 7 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 53m Warning FailedScheduling pod/postgresql-0 0/8 nodes are available: 8 node(s) didn't match Pod's node affinity/selector. preemption: 0/8 nodes are available: 8 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 48m Warning FailedScheduling pod/postgresql-0 0/9 nodes are available: 1 node(s) were unschedulable, 8 node(s) didn't match Pod's node affinity/selector. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 16m Warning FailedScheduling pod/postgresql-0 0/10 nodes are available: 1 node(s) were unschedulable, 9 node(s) didn't match Pod's node affinity/selector. preemption: 0/10 nodes are available: 10 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 22m Warning FailedScheduling pod/postgresql-0 0/11 nodes are available: 2 node(s) were unschedulable, 9 node(s) didn't match Pod's node affinity/selector. preemption: 0/11 nodes are available: 11 Preemption is not helpful for scheduling. 
jx-greencapitaltrade-thor-pr-512: 96s Warning FailedScheduling pod/postgresql-0 0/9 nodes are available: 9 node(s) didn't match Pod's node affinity/selector. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 27m Warning BackOff pod/preview-thor-7b9967997d-jtmzb Back-off restarting failed container thor in pod preview-thor-7b9967997d-jtmzb_jx-greencapitaltrade-thor-pr-512(c3cc6a90-b1ce-4747-9cd0-95e2858ed63c) jx-greencapitaltrade-thor-pr-512: 25m Normal Scheduled pod/preview-thor-7b9967997d-sltmb Successfully assigned jx-greencapitaltrade-thor-pr-512/preview-thor-7b9967997d-sltmb to ip-10-0-62-105.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 25m Normal Pulling pod/preview-thor-7b9967997d-sltmb Pulling image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-9-SNAPSHOT" jx-greencapitaltrade-thor-pr-512: 24m Normal Pulled pod/preview-thor-7b9967997d-sltmb Successfully pulled image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-9-SNAPSHOT" in 28.297s (28.297s including waiting). Image size: 162128725 bytes. 
jx-greencapitaltrade-thor-pr-512: 23m Normal Created pod/preview-thor-7b9967997d-sltmb Created container: thor jx-greencapitaltrade-thor-pr-512: 23m Normal Started pod/preview-thor-7b9967997d-sltmb Started container thor jx-greencapitaltrade-thor-pr-512: 23m Warning Unhealthy pod/preview-thor-7b9967997d-sltmb Readiness probe failed: Get "http://10.0.52.75:8080/_healthz": dial tcp 10.0.52.75:8080: connect: connection refused jx-greencapitaltrade-thor-pr-512: 20m Normal Pulled pod/preview-thor-7b9967997d-sltmb Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-9-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 17s Warning BackOff pod/preview-thor-7b9967997d-sltmb Back-off restarting failed container thor in pod preview-thor-7b9967997d-sltmb_jx-greencapitaltrade-thor-pr-512(6f2a7f08-13a5-4e60-afc0-34c015c6d601) jx-greencapitaltrade-thor-pr-512: 25m Normal SuccessfulCreate replicaset/preview-thor-7b9967997d Created pod: preview-thor-7b9967997d-sltmb jx-greencapitaltrade-thor-pr-512: 25m Normal Scheduled pod/preview-thor-9c75d5489-g8qzj Successfully assigned jx-greencapitaltrade-thor-pr-512/preview-thor-9c75d5489-g8qzj to ip-10-0-62-105.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 25m Normal Pulling pod/preview-thor-9c75d5489-g8qzj Pulling image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-8-SNAPSHOT" jx-greencapitaltrade-thor-pr-512: 25m Normal Pulled pod/preview-thor-9c75d5489-g8qzj Successfully pulled image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-8-SNAPSHOT" in 21.977s (21.977s including waiting). Image size: 162104989 bytes. 
jx-greencapitaltrade-thor-pr-512: 23m Normal Created pod/preview-thor-9c75d5489-g8qzj Created container: thor jx-greencapitaltrade-thor-pr-512: 23m Normal Started pod/preview-thor-9c75d5489-g8qzj Started container thor jx-greencapitaltrade-thor-pr-512: 23m Warning Unhealthy pod/preview-thor-9c75d5489-g8qzj Readiness probe failed: Get "http://10.0.52.74:8080/_healthz": dial tcp 10.0.52.74:8080: connect: connection refused jx-greencapitaltrade-thor-pr-512: 23m Normal Pulled pod/preview-thor-9c75d5489-g8qzj Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-8-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 14s Warning BackOff pod/preview-thor-9c75d5489-g8qzj Back-off restarting failed container thor in pod preview-thor-9c75d5489-g8qzj_jx-greencapitaltrade-thor-pr-512(bb428d40-88ba-4c93-9272-603cdecae472) jx-greencapitaltrade-thor-pr-512: 27m Warning BackOff pod/preview-thor-9c75d5489-qfxrx Back-off restarting failed container thor in pod preview-thor-9c75d5489-qfxrx_jx-greencapitaltrade-thor-pr-512(737934c0-279a-47a9-8857-2b12daaa231b) jx-greencapitaltrade-thor-pr-512: 25m Normal SuccessfulCreate replicaset/preview-thor-9c75d5489 Created pod: preview-thor-9c75d5489-g8qzj jx-greencapitaltrade-thor-pr-512: 25m Normal Killing pod/redis-master-0 Stopping container redis jx-greencapitaltrade-thor-pr-512: 25m Normal Scheduled pod/redis-master-0 Successfully assigned jx-greencapitaltrade-thor-pr-512/redis-master-0 to ip-10-0-62-105.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 25m Normal Pulling pod/redis-master-0 Pulling image "docker.io/bitnamilegacy/redis:7.2.3-debian-11-r2" jx-greencapitaltrade-thor-pr-512: 24m Normal Pulled pod/redis-master-0 Successfully pulled image "docker.io/bitnamilegacy/redis:7.2.3-debian-11-r2" in 1m3.592s (1m3.592s including waiting). Image size: 38134309 bytes. 
jx-greencapitaltrade-thor-pr-512: 24m Normal Created pod/redis-master-0 Created container: redis jx-greencapitaltrade-thor-pr-512: 24m Normal Started pod/redis-master-0 Started container redis jx-greencapitaltrade-thor-pr-512: 25m Normal SuccessfulCreate statefulset/redis-master create Pod redis-master-0 in StatefulSet redis-master successful jx-greencapitaltrade-thor-pr-512: 0s Normal SuccessfulCreate job/jx-verify-gc-jobs-h8j7q Created pod: jx-verify-gc-jobs-h8j7q-wgbmx jx-greencapitaltrade-thor-pr-512: 0s Normal Scheduled pod/jx-verify-gc-jobs-h8j7q-wgbmx Successfully assigned jx-greencapitaltrade-thor-pr-512/jx-verify-gc-jobs-h8j7q-wgbmx to ip-10-0-52-177.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/jx-verify-gc-jobs-h8j7q-wgbmx Container image "ghcr.io/jenkins-x/jx-verify:0.3.19" already present on machine jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/jx-verify-gc-jobs-h8j7q-wgbmx Created container: job jx-greencapitaltrade-thor-pr-512: 0s Normal Started pod/jx-verify-gc-jobs-h8j7q-wgbmx Started container job jx-greencapitaltrade-thor-pr-512: 0s Normal ScalingReplicaSet deployment/preview-thor Scaled down replica set preview-thor-9c75d5489 to 0 from 1 jx-greencapitaltrade-thor-pr-512: 0s Normal SuccessfulDelete replicaset/preview-thor-9c75d5489 Deleted pod: preview-thor-9c75d5489-g8qzj jx-greencapitaltrade-thor-pr-512: 0s Normal ScalingReplicaSet deployment/preview-thor Scaled up replica set preview-thor-786db5d65 to 1 from 0 jx-greencapitaltrade-thor-pr-512: 0s Normal SuccessfulCreate replicaset/preview-thor-786db5d65 Created pod: preview-thor-786db5d65-8rg2b jx-greencapitaltrade-thor-pr-512: 0s Normal Scheduled pod/preview-thor-786db5d65-8rg2b Successfully assigned jx-greencapitaltrade-thor-pr-512/preview-thor-786db5d65-8rg2b to ip-10-0-74-255.ap-south-1.compute.internal jx-greencapitaltrade-thor-pr-512: 0s Normal Pulling pod/preview-thor-786db5d65-8rg2b Pulling image 
"417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" jx-greencapitaltrade-thor-pr-512: 0s Normal Completed job/jx-verify-gc-jobs-h8j7q Job completed jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/preview-thor-786db5d65-8rg2b Successfully pulled image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" in 4.154s (4.154s including waiting). Image size: 162103260 bytes. jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/preview-thor-786db5d65-8rg2b Created container: thor jx-greencapitaltrade-thor-pr-512: 0s Normal Started pod/preview-thor-786db5d65-8rg2b Started container thor jx-greencapitaltrade-thor-pr-512: 0s Warning Unhealthy pod/preview-thor-786db5d65-8rg2b Readiness probe failed: Get "http://10.0.79.84:8080/_healthz": dial tcp 10.0.79.84:8080: connect: connection refused jx-greencapitaltrade-thor-pr-512: 0s Warning Unhealthy pod/preview-thor-786db5d65-8rg2b Readiness probe failed: Get "http://10.0.79.84:8080/_healthz": dial tcp 10.0.79.84:8080: connect: connection refused jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/preview-thor-786db5d65-8rg2b Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/preview-thor-786db5d65-8rg2b Created container: thor jx-greencapitaltrade-thor-pr-512: 0s Normal Started pod/preview-thor-786db5d65-8rg2b Started container thor jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod 
preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/preview-thor-786db5d65-8rg2b Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/preview-thor-786db5d65-8rg2b Created container: thor jx-greencapitaltrade-thor-pr-512: 0s Normal Started pod/preview-thor-786db5d65-8rg2b Started container thor jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/preview-thor-786db5d65-8rg2b Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/preview-thor-786db5d65-8rg2b Created container: thor jx-greencapitaltrade-thor-pr-512: 0s Normal Started pod/preview-thor-786db5d65-8rg2b Started container thor jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff 
pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 0s Normal Pulled pod/preview-thor-786db5d65-8rg2b Container image "417755753340.dkr.ecr.ap-south-1.amazonaws.com/greencapitaltrade/thor:0.0.0-PR-512-11-SNAPSHOT" already present on machine jx-greencapitaltrade-thor-pr-512: 0s Normal Created pod/preview-thor-786db5d65-8rg2b Created container: thor jx-greencapitaltrade-thor-pr-512: 0s Normal NotTriggerScaleUp pod/postgresql-0 pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector, 1 max node group size reached jx-greencapitaltrade-thor-pr-512: 0s Warning FailedScheduling pod/postgresql-0 0/9 nodes are available: 9 node(s) didn't match Pod's node affinity/selector. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling. 
jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-7b9967997d-sltmb Back-off restarting failed container thor in pod preview-thor-7b9967997d-sltmb_jx-greencapitaltrade-thor-pr-512(6f2a7f08-13a5-4e60-afc0-34c015c6d601) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/kafka-cluster-broker-0 Back-off restarting failed container kafka in pod kafka-cluster-broker-0_jx-greencapitaltrade-thor-pr-512(932ab3ee-c316-49de-b8a2-09e7568b29e3) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-786db5d65-8rg2b Back-off restarting failed container thor in pod preview-thor-786db5d65-8rg2b_jx-greencapitaltrade-thor-pr-512(18197280-74d7-442b-b279-49b6454ce7f6) jx-greencapitaltrade-thor-pr-512: 60m Warning FailedScheduling pod/postgresql-0 0/8 nodes are available: 8 node(s) didn't match Pod's node affinity/selector. preemption: 0/8 nodes are available: 8 Preemption is not helpful for scheduling. jx-greencapitaltrade-thor-pr-512: 0s Normal NotTriggerScaleUp pod/postgresql-0 pod didn't trigger scale-up: 1 node(s) didn't match Pod's node affinity/selector, 1 max node group size reached jx-greencapitaltrade-thor-pr-512: 0s Warning FailedScheduling pod/postgresql-0 0/9 nodes are available: 9 node(s) didn't match Pod's node affinity/selector. preemption: 0/9 nodes are available: 9 Preemption is not helpful for scheduling. 
jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/preview-thor-7b9967997d-sltmb Back-off restarting failed container thor in pod preview-thor-7b9967997d-sltmb_jx-greencapitaltrade-thor-pr-512(6f2a7f08-13a5-4e60-afc0-34c015c6d601) jx-greencapitaltrade-thor-pr-512: 0s Warning BackOff pod/kafka-cluster-broker-0 Back-off restarting failed container kafka in pod kafka-cluster-broker-0_jx-greencapitaltrade-thor-pr-512(932ab3ee-c316-49de-b8a2-09e7568b29e3) detected a failure on the preview environment jx-greencapitaltrade-thor-pr-512 so looking for an erroring pod found pod kafka-cluster-broker-0 and container kafka in namespace jx-greencapitaltrade-thor-pr-512 in state Running with 10 restarts error: failed to helmfile sync: failed to sync helmfile due to a timeout, pod kafka-cluster-broker-0 has failed with the logs: failed pod kafka-cluster-broker-0 in namespace jx-greencapitaltrade-thor-pr-512: removed directory '/tmp/hsperfdata_kafka'removed '/tmp/kafka/clients.truststore.p12'removed '/tmp/kafka/cluster.keystore.p12'removed '/tmp/kafka/cluster.truststore.p12'removed directory '/tmp/kafka'removed '/tmp/kafka-agent.properties'removed '/tmp/strimzi.properties'STRIMZI_BROKER_ID=0Preparing truststore for replication listenerAdding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/kafka/cluster.truststore.p12 with alias caCertificate was added to keystorePreparing truststore for replication listener is completeLooking for the CA matching the server certificateCA matching the server certificate found: /opt/kafka/cluster-ca-certs/ca.crtPreparing keystore for replication and clienttls listenerPreparing keystore for replication and clienttls listener is completePreparing truststore for client authenticationAdding /opt/kafka/client-ca-certs/ca.crt to truststore /tmp/kafka/clients.truststore.p12 with alias caCertificate was added to keystorePreparing truststore for client authentication is completeStarting Kafka with 
configuration:
#############################################################
# This file is automatically generated by the Strimzi Cluster Operator
# Any changes to this file will be ignored and overwritten!
#############################################################
##########
# Node ID
##########
node.id=0
##########
# KRaft configuration
##########
process.roles=broker
controller.listener.names=CONTROLPLANE-9090
controller.quorum.voters=1@kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9090
##########
# KRaft metadata log dir configuration
##########
metadata.log.dir=/var/lib/kafka/data-0/kafka-log0
##########
# Kafka message logs configuration
##########
log.dirs=/var/lib/kafka/data-0/kafka-log0
##########
# Control Plane listener
##########
listener.name.controlplane-9090.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.controlplane-9090.ssl.keystore.password=[hidden]
listener.name.controlplane-9090.ssl.keystore.type=PKCS12
listener.name.controlplane-9090.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
listener.name.controlplane-9090.ssl.truststore.password=[hidden]
listener.name.controlplane-9090.ssl.truststore.type=PKCS12
listener.name.controlplane-9090.ssl.client.auth=required
##########
# Replication listener
##########
listener.name.replication-9091.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
listener.name.replication-9091.ssl.keystore.password=[hidden]
listener.name.replication-9091.ssl.keystore.type=PKCS12
listener.name.replication-9091.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
listener.name.replication-9091.ssl.truststore.password=[hidden]
listener.name.replication-9091.ssl.truststore.type=PKCS12
listener.name.replication-9091.ssl.client.auth=required
##########
# Listener configuration: PLAIN-9092
##########
listener.name.plain-9092.scram-sha-512.sasl.jaas.config=[hidden]
listener.name.plain-9092.sasl.enabled.mechanisms=SCRAM-SHA-512
##########
# Common listener configuration
##########
listener.security.protocol.map=CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:SASL_PLAINTEXT
listeners=REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092
inter.broker.listener.name=REPLICATION-9091
advertised.listeners=REPLICATION-9091://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc:9091,PLAIN-9092://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9092
sasl.enabled.mechanisms=
ssl.endpoint.identification.algorithm=HTTPS
##########
# Authorization
##########
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:CN=kafka-cluster-kafka,O=io.strimzi;User:CN=kafka-cluster-entity-topic-operator,O=io.strimzi;User:CN=kafka-cluster-entity-user-operator,O=io.strimzi;User:CN=kafka-cluster-kafka-exporter,O=io.strimzi;User:CN=kafka-cluster-cruise-control,O=io.strimzi;User:CN=cluster-operator,O=io.strimzi
##########
# Config providers
##########
# Configuration providers configured by the user and by Strimzi
config.providers=strimzienv,strimzifile,strimzidir
config.providers.strimzienv.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.strimzienv.param.allowlist.pattern=.*
config.providers.strimzifile.class=org.apache.kafka.common.config.provider.FileConfigProvider
config.providers.strimzifile.param.allowed.paths=/opt/kafka
config.providers.strimzidir.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
config.providers.strimzidir.param.allowed.paths=/opt/kafka
##########
# User provided configuration
##########
min.insync.replicas=1
auto.create.topics.enable=true
default.replication.factor=1
log.retention.hours=24
num.partitions=1
offsets.topic.replication.factor=1
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
Making sure the Kraft storage is formatted with cluster ID uk-VxOKlSKqKClqJLo-AlQ and metadata version 4.1-IV0
2026-05-15 14:04:18 INFO [main]
Log4jControllerRegistration$:33 - Registered `kafka:type=kafka.Log4jController` MBean
2026-05-15 14:04:19 INFO [main] AbstractConfig:380 - KafkaConfig values: add.partitions.to.txn.retry.backoff.max.ms = 100 add.partitions.to.txn.retry.backoff.ms = 20 advertised.listeners = REPLICATION-9091://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc:9091,PLAIN-9092://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9092 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = org.apache.kafka.metadata.authorizer.StandardAuthorizer auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 0 broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.gzip.level = -1 compression.lz4.level = 9 compression.type = producer compression.zstd.level = 3 connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 controlled.shutdown.enable = true controller.listener.names = CONTROLPLANE-9090 controller.performance.always.log.threshold.ms = 2000 controller.performance.sample.period.ms = 60000 controller.quorum.append.linger.ms = 25 controller.quorum.bootstrap.servers = [] controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [1@kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9090] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.consumer.assignors = [uniform, range] group.consumer.heartbeat.interval.ms = 5000 group.consumer.max.heartbeat.interval.ms = 15000 group.consumer.max.session.timeout.ms = 60000 group.consumer.max.size = 2147483647 group.consumer.migration.policy = bidirectional group.consumer.min.heartbeat.interval.ms = 5000 group.consumer.min.session.timeout.ms = 45000 group.consumer.regex.refresh.interval.ms = 600000 group.consumer.session.timeout.ms = 45000 group.coordinator.append.linger.ms = 5 group.coordinator.rebalance.protocols = [classic, consumer, streams] group.coordinator.threads = 4 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 group.share.assignors = [simple] group.share.delivery.count.limit = 5 group.share.enable = false group.share.heartbeat.interval.ms = 5000 group.share.max.heartbeat.interval.ms = 15000 group.share.max.record.lock.duration.ms = 60000 group.share.max.session.timeout.ms = 60000 group.share.max.share.sessions = 2000 group.share.max.size = 200 group.share.min.heartbeat.interval.ms = 5000 group.share.min.record.lock.duration.ms = 15000 group.share.min.session.timeout.ms = 45000 group.share.partition.max.record.locks = 2000 group.share.persister.class.name = org.apache.kafka.server.share.persister.DefaultStatePersister group.share.record.lock.duration.ms = 30000 group.share.session.timeout.ms = 45000 group.streams.heartbeat.interval.ms = 5000 group.streams.max.heartbeat.interval.ms = 15000 group.streams.max.session.timeout.ms = 60000 group.streams.max.size = 2147483647 
group.streams.max.standby.replicas = 2 group.streams.min.heartbeat.interval.ms = 5000 group.streams.min.session.timeout.ms = 45000 group.streams.num.standby.replicas = 0 group.streams.session.timeout.ms = 45000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = REPLICATION-9091 internal.metadata.delete.delay.millis = 60000 internal.metadata.log.segment.bytes = null internal.metadata.max.batch.size.in.bytes = 8388608 internal.metadata.max.fetch.size.in.bytes = 8388608 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 listener.security.protocol.map = CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:SASL_PLAINTEXT listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dir.failure.timeout.ms = 30000 log.dirs = /var/lib/kafka/data-0/kafka-log0 log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.initial.task.delay.ms = 30000 log.local.retention.bytes = -2 log.local.retention.ms = -2 log.message.timestamp.after.max.ms = 3600000 log.message.timestamp.before.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 
log.retention.hours = 24 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 max.request.partition.size.limit = 2000 message.max.bytes = 1048588 metadata.log.dir = /var/lib/kafka/data-0/kafka-log0 metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.max.snapshot.interval.ms = 3600000 metadata.log.segment.bytes = 1073741824 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = 104857600 metadata.max.retention.ms = 604800000 metric.reporters = [org.apache.kafka.common.metrics.JmxReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 0 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 2 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [broker] producer.id.expiration.check.interval.ms = 600000 producer.id.expiration.ms = 86400000 producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.fetch.max.wait.ms = 500 remote.list.offsets.request.timeout.ms = 30000 
remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.copier.thread.pool.size = 10 remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 remote.log.manager.copy.quota.window.num = 11 remote.log.manager.copy.quota.window.size.seconds = 1 remote.log.manager.expiration.thread.pool.size = 10 remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 remote.log.manager.fetch.quota.window.num = 11 remote.log.manager.fetch.quota.window.size.seconds = 1 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 2 remote.log.metadata.custom.metadata.max.bytes = 128 remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = rlmm.config. remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = rsm.config. 
remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = GSSAPI sasl.oauthbearer.assertion.algorithm = RS256 sasl.oauthbearer.assertion.claim.aud = null sasl.oauthbearer.assertion.claim.exp.seconds = 300 sasl.oauthbearer.assertion.claim.iss = null sasl.oauthbearer.assertion.claim.jti.include = false sasl.oauthbearer.assertion.claim.nbf.seconds = 60 sasl.oauthbearer.assertion.claim.sub = null sasl.oauthbearer.assertion.file = null sasl.oauthbearer.assertion.private.key.file = null sasl.oauthbearer.assertion.private.key.passphrase = null sasl.oauthbearer.assertion.template.file = null sasl.oauthbearer.client.credentials.client.id = null sasl.oauthbearer.client.credentials.client.secret = null sasl.oauthbearer.clock.skew.seconds = 
30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.jwt.retriever.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever sasl.oauthbearer.jwt.validator.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator sasl.oauthbearer.scope = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = PLAINTEXT security.providers = null server.max.startup.time.ms = 9223372036854775807 share.coordinator.append.linger.ms = 5 share.coordinator.cold.partition.snapshot.interval.ms = 300000 share.coordinator.load.buffer.size = 5242880 share.coordinator.snapshot.update.records.per.snapshot = 500 share.coordinator.state.topic.compression.codec = 0 share.coordinator.state.topic.min.isr = 2 share.coordinator.state.topic.num.partitions = 50 share.coordinator.state.topic.prune.interval.ms = 300000 share.coordinator.state.topic.replication.factor = 3 share.coordinator.state.topic.segment.bytes = 104857600 share.coordinator.threads = 1 share.coordinator.write.timeout.ms = 5000 share.fetch.purgatory.purge.interval.requests = 1000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.allow.dn.changes = false ssl.allow.san.changes = false ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = HTTPS ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS telemetry.max.bytes = 1048576 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.partition.verification.enable = true transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transaction.two.phase.commit.enable = false transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false unclean.leader.election.interval.ms = 300000 unstable.api.versions.enable = false unstable.feature.versions.enable = false
All of the log directories are already formatted.
KRaft storage formatting is done
Removing quorum-state file
Preparing Kafka Agent configuration
+ exec /usr/bin/tini -w -e 143 -- /opt/kafka/bin/kafka-server-start.sh /tmp/strimzi.properties
2026-05-15 14:04:23 INFO [main] KafkaAgent:342 - Starting KafkaAgent with sslKeyStorePath=/tmp/kafka/cluster.keystore.p12 and sslTrustStore=/tmp/kafka/cluster.truststore.p12
2026-05-15 14:04:24 INFO [main] Server:555 - jetty-12.0.22; built: 2025-06-02T15:25:31.946Z; git: 335c9ab44a5591f0ea941bf350e139b8c4f5537c; jvm 17.0.16+8-LTS
2026-05-15 14:04:24 INFO [main] ContextHandler:764 - Started oejsh.ContextHandler@1ef6d34c{/v1/broker-state,/v1/broker-state,b=null,a=AVAILABLE,h=iska.KafkaAgent$@11bb571c{STARTED}}
2026-05-15 14:04:24 INFO [main] ContextHandler:764 - Started 
oejsh.ContextHandler@7c51f34b{/v1/ready,/v1/ready,b=null,a=AVAILABLE,h=iska.KafkaAgent$@5495333e{STARTED}}
2026-05-15 14:04:25 INFO [main] SslContextFactory:337 - x509=X509@2766ca9d(kafka-cluster-broker-0,h=[kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc, kafka-cluster-kafka-bootstrap.jx-greencapitaltrade-thor-pr-512.svc.cluster.local, kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc, kafka-cluster-kafka-bootstrap, kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512, kafka-cluster-kafka-brokers, kafka-cluster-kafka-bootstrap.jx-greencapitaltrade-thor-pr-512, kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local, kafka-cluster-kafka-bootstrap.jx-greencapitaltrade-thor-pr-512.svc, kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local, kafka-cluster-kafka],a=[],w=[]) for Server@5460cf3a[provider=null,keyStore=file:///tmp/kafka/cluster.keystore.p12,trustStore=file:///tmp/kafka/cluster.truststore.p12]
2026-05-15 14:04:25 INFO [main] AbstractConnector:326 - Started ServerConnector@5d6e3dfe{SSL, (ssl, http/1.1)}{0.0.0.0:8443}
2026-05-15 14:04:25 INFO [main] AbstractConnector:326 - Started ServerConnector@70d52b28{HTTP/1.1, (http/1.1)}{localhost:8080}
2026-05-15 14:04:25 INFO [main] Server:612 - Started oejs.Server@4fa1c212{STARTING}[12.0.22,sto=30000] @4790ms
2026-05-15 14:04:25 INFO [main] KafkaAgent:120 - Starting metrics registry
2026-05-15 14:04:26 INFO [main] KafkaAgent:156 - Found class org.apache.kafka.server.metrics.KafkaYammerMetrics for Kafka 3.3 and newer.
2026-05-15 14:04:26 INFO [main] Log4jControllerRegistration$:33 - Registered `kafka:type=kafka.Log4jController` MBean
2026-05-15 14:04:28 INFO [main] LoggingSignalHandler:72 - Registered signal handlers for TERM, INT, HUP
2026-05-15 14:04:28 INFO [main] BrokerServer:69 - [BrokerServer id=0] Transition from SHUTDOWN to STARTING
2026-05-15 14:04:28 INFO [main] SharedServer:69 - 
[SharedServer id=0] Starting SharedServer
2026-05-15 14:04:28 INFO [main] LogLoader:477 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Recovering unflushed segment 7250. 0 recovered for __cluster_metadata-0.
2026-05-15 14:04:28 INFO [main] UnifiedLog:2440 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Loading producer state till offset 7250
2026-05-15 14:04:28 INFO [main] UnifiedLog:2462 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Reloading from producer snapshot and rebuilding producer state from offset 7250
2026-05-15 14:04:28 INFO [main] SnapshotFile:48 - Deleted producer state snapshot /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000008545.snapshot
2026-05-15 14:04:28 INFO [main] ProducerStateManager:441 - [ProducerStateManager partition=__cluster_metadata-0] Wrote producer snapshot at offset 7250 with 0 producer ids in 10 ms.
2026-05-15 14:04:28 INFO [main] UnifiedLog:2498 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Producer state recovery took 0ms for snapshot load and 11ms for segment recovery from offset 7250
2026-05-15 14:04:28 INFO [main] ProducerStateManager:441 - [ProducerStateManager partition=__cluster_metadata-0] Wrote producer snapshot at offset 8545 with 0 producer ids in 90 ms.
2026-05-15 14:04:28 INFO [main] UnifiedLog:2440 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Loading producer state till offset 8545
2026-05-15 14:04:28 INFO [main] UnifiedLog:2462 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Reloading from producer snapshot and rebuilding producer state from offset 8545
2026-05-15 14:04:28 INFO [main] SnapshotFile:48 - Deleted producer state snapshot /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000007250.snapshot
2026-05-15 14:04:28 INFO [main] ProducerStateManager:302 - 
[ProducerStateManager partition=__cluster_metadata-0] Loading producer state from snapshot file 'SnapshotFile(offset=8545, file=/var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000008545.snapshot)'
2026-05-15 14:04:28 INFO [main] UnifiedLog:2498 - [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Producer state recovery took 87ms for snapshot load and 0ms for segment recovery from offset 8545
2026-05-15 14:04:29 INFO [main] KafkaMetadataLog$:580 - Initialized snapshots with IDs SortedSet(OffsetAndEpoch[offset=7250, epoch=1]) from /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0
2026-05-15 14:04:29 INFO [raft-expiration-reaper] TimingWheelExpirationService$ExpiredOperationReaper:133 - [raft-expiration-reaper]: Starting
2026-05-15 14:04:29 INFO [main] KafkaRaftClient:503 - [RaftManager id=0] Reading KRaft snapshot and log as part of the initialization
2026-05-15 14:04:29 INFO [main] KRaftControlRecordStateMachine:276 - [RaftManager id=0] Loading snapshot (OffsetAndEpoch[offset=7250, epoch=1]) since log start offset (7250) is greater than the internal listener's next offset (-1)
2026-05-15 14:04:29 INFO [main] KafkaRaftClient:505 - [RaftManager id=0] Starting voters are VoterSet(voters={1=VoterNode(voterKey=ReplicaKey(id=1, directoryId=), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local/10.0.76.108:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0])})
2026-05-15 14:04:29 INFO [main] KafkaRaftClient:531 - [RaftManager id=0] Starting request manager with static voters: [kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9090 (id: 1 rack: null isFenced: false)]
2026-05-15 14:04:30 WARN [main] QuorumState:158 - [RaftManager id=0] Epoch from quorum store file (/var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/quorum-state) 
is 0, which is smaller than last written epoch 1 in the log
2026-05-15 14:04:30 INFO [main] QuorumState:732 - [RaftManager id=0] Attempting durable transition to UnattachedState(epoch=1, leaderId=OptionalInt.empty, votedKey=Optional.empty, voters=[1], electionTimeoutMs=1092, highWatermark=Optional.empty) from null
2026-05-15 14:04:30 INFO [main] QuorumState:749 - [RaftManager id=0] Completed transition to UnattachedState(epoch=1, leaderId=OptionalInt.empty, votedKey=Optional.empty, voters=[1], electionTimeoutMs=1092, highWatermark=Optional.empty) from null
2026-05-15 14:04:30 INFO [kafka-0-raft-outbound-request-thread] KafkaNetworkChannel$SendThread:133 - [kafka-0-raft-outbound-request-thread]: Starting
2026-05-15 14:04:30 INFO [kafka-0-raft-io-thread] KafkaRaftClientDriver:133 - [kafka-0-raft-io-thread]: Starting
2026-05-15 14:04:30 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:30 INFO [main] BrokerServer:69 - [BrokerServer id=0] Starting broker
2026-05-15 14:04:30 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:30 INFO [broker-0-ThrottledChannelReaper-Request] ClientQuotaManager$ThrottledChannelReaper:133 - [broker-0-ThrottledChannelReaper-Request]: Starting
2026-05-15 14:04:30 INFO [broker-0-ThrottledChannelReaper-Fetch] ClientQuotaManager$ThrottledChannelReaper:133 - [broker-0-ThrottledChannelReaper-Fetch]: Starting
2026-05-15 14:04:30 INFO [broker-0-ThrottledChannelReaper-ControllerMutation] ClientQuotaManager$ThrottledChannelReaper:133 - [broker-0-ThrottledChannelReaper-ControllerMutation]: Starting
2026-05-15 14:04:30 INFO [broker-0-ThrottledChannelReaper-Produce] ClientQuotaManager$ThrottledChannelReaper:133 - 
[broker-0-ThrottledChannelReaper-Produce]: Starting
2026-05-15 14:04:30 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:30 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:30 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:30 INFO [main] AbstractConfig:380 - KafkaConfig values: add.partitions.to.txn.retry.backoff.max.ms = 100 add.partitions.to.txn.retry.backoff.ms = 20 advertised.listeners = REPLICATION-9091://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc:9091,PLAIN-9092://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9092 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = org.apache.kafka.metadata.authorizer.StandardAuthorizer auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 0 broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.gzip.level = -1 compression.lz4.level = 9 compression.type = producer compression.zstd.level = 3 connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 controlled.shutdown.enable = true controller.listener.names = CONTROLPLANE-9090 controller.performance.always.log.threshold.ms = 2000 controller.performance.sample.period.ms = 60000 
controller.quorum.append.linger.ms = 25 controller.quorum.bootstrap.servers = [] controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [1@kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9090] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.consumer.assignors = [uniform, range] group.consumer.heartbeat.interval.ms = 5000 group.consumer.max.heartbeat.interval.ms = 15000 group.consumer.max.session.timeout.ms = 60000 group.consumer.max.size = 2147483647 group.consumer.migration.policy = bidirectional group.consumer.min.heartbeat.interval.ms = 5000 group.consumer.min.session.timeout.ms = 45000 group.consumer.regex.refresh.interval.ms = 600000 group.consumer.session.timeout.ms = 45000 group.coordinator.append.linger.ms = 5 group.coordinator.rebalance.protocols = [classic, consumer, streams] group.coordinator.threads = 4 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 group.share.assignors = [simple] group.share.delivery.count.limit = 5 group.share.enable = false group.share.heartbeat.interval.ms = 5000 group.share.max.heartbeat.interval.ms = 15000 group.share.max.record.lock.duration.ms = 60000 group.share.max.session.timeout.ms = 60000 
group.share.max.share.sessions = 2000 group.share.max.size = 200 group.share.min.heartbeat.interval.ms = 5000 group.share.min.record.lock.duration.ms = 15000 group.share.min.session.timeout.ms = 45000 group.share.partition.max.record.locks = 2000 group.share.persister.class.name = org.apache.kafka.server.share.persister.DefaultStatePersister group.share.record.lock.duration.ms = 30000 group.share.session.timeout.ms = 45000 group.streams.heartbeat.interval.ms = 5000 group.streams.max.heartbeat.interval.ms = 15000 group.streams.max.session.timeout.ms = 60000 group.streams.max.size = 2147483647 group.streams.max.standby.replicas = 2 group.streams.min.heartbeat.interval.ms = 5000 group.streams.min.session.timeout.ms = 45000 group.streams.num.standby.replicas = 0 group.streams.session.timeout.ms = 45000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = REPLICATION-9091 internal.metadata.delete.delay.millis = 60000 internal.metadata.log.segment.bytes = null internal.metadata.max.batch.size.in.bytes = 8388608 internal.metadata.max.fetch.size.in.bytes = 8388608 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 listener.security.protocol.map = CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:SASL_PLAINTEXT listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dir.failure.timeout.ms = 30000 log.dirs = /var/lib/kafka/data-0/kafka-log0 log.flush.interval.messages = 
9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.initial.task.delay.ms = 30000 log.local.retention.bytes = -2 log.local.retention.ms = -2 log.message.timestamp.after.max.ms = 3600000 log.message.timestamp.before.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 24 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 max.request.partition.size.limit = 2000 message.max.bytes = 1048588 metadata.log.dir = /var/lib/kafka/data-0/kafka-log0 metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.max.snapshot.interval.ms = 3600000 metadata.log.segment.bytes = 1073741824 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = 104857600 metadata.max.retention.ms = 604800000 metric.reporters = [org.apache.kafka.common.metrics.JmxReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 0 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 2 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 
offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [broker] producer.id.expiration.check.interval.ms = 600000 producer.id.expiration.ms = 86400000 producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.fetch.max.wait.ms = 500 remote.list.offsets.request.timeout.ms = 30000 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.copier.thread.pool.size = 10 remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 remote.log.manager.copy.quota.window.num = 11 remote.log.manager.copy.quota.window.size.seconds = 1 remote.log.manager.expiration.thread.pool.size = 10 remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 remote.log.manager.fetch.quota.window.num = 11 remote.log.manager.fetch.quota.window.size.seconds = 1 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 2 remote.log.metadata.custom.metadata.max.bytes = 128 remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = rlmm.config. remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = rsm.config. 
remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = GSSAPI sasl.oauthbearer.assertion.algorithm = RS256 sasl.oauthbearer.assertion.claim.aud = null sasl.oauthbearer.assertion.claim.exp.seconds = 300 sasl.oauthbearer.assertion.claim.iss = null sasl.oauthbearer.assertion.claim.jti.include = false sasl.oauthbearer.assertion.claim.nbf.seconds = 60 sasl.oauthbearer.assertion.claim.sub = null sasl.oauthbearer.assertion.file = null sasl.oauthbearer.assertion.private.key.file = null sasl.oauthbearer.assertion.private.key.passphrase = null sasl.oauthbearer.assertion.template.file = null sasl.oauthbearer.client.credentials.client.id = null sasl.oauthbearer.client.credentials.client.secret = null sasl.oauthbearer.clock.skew.seconds = 
30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.jwt.retriever.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever sasl.oauthbearer.jwt.validator.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator sasl.oauthbearer.scope = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = PLAINTEXT security.providers = null server.max.startup.time.ms = 9223372036854775807 share.coordinator.append.linger.ms = 5 share.coordinator.cold.partition.snapshot.interval.ms = 300000 share.coordinator.load.buffer.size = 5242880 share.coordinator.snapshot.update.records.per.snapshot = 500 share.coordinator.state.topic.compression.codec = 0 share.coordinator.state.topic.min.isr = 2 share.coordinator.state.topic.num.partitions = 50 share.coordinator.state.topic.prune.interval.ms = 300000 share.coordinator.state.topic.replication.factor = 3 share.coordinator.state.topic.segment.bytes = 104857600 share.coordinator.threads = 1 share.coordinator.write.timeout.ms = 5000 share.fetch.purgatory.purge.interval.requests = 1000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.allow.dn.changes = false ssl.allow.san.changes = false ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = HTTPS ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS telemetry.max.bytes = 1048576 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.partition.verification.enable = true transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transaction.two.phase.commit.enable = false transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false unclean.leader.election.interval.ms = 300000 unstable.api.versions.enable = false unstable.feature.versions.enable = false
2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.
2026-05-15 14:04:31 INFO [main] AbstractConfig:380 - KafkaConfig values: add.partitions.to.txn.retry.backoff.max.ms = 100 add.partitions.to.txn.retry.backoff.ms = 20 advertised.listeners = REPLICATION-9091://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc:9091,PLAIN-9092://kafka-cluster-broker-0.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9092 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = 
org.apache.kafka.metadata.authorizer.StandardAuthorizer auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 0 broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.gzip.level = -1 compression.lz4.level = 9 compression.type = producer compression.zstd.level = 3 connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 controlled.shutdown.enable = true controller.listener.names = CONTROLPLANE-9090 controller.performance.always.log.threshold.ms = 2000 controller.performance.sample.period.ms = 60000 controller.quorum.append.linger.ms = 25 controller.quorum.bootstrap.servers = [] controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [1@kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc.cluster.local:9090] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.consumer.assignors = [uniform, range] group.consumer.heartbeat.interval.ms = 5000 group.consumer.max.heartbeat.interval.ms = 15000 group.consumer.max.session.timeout.ms = 60000 group.consumer.max.size = 2147483647 group.consumer.migration.policy = bidirectional group.consumer.min.heartbeat.interval.ms = 5000 
group.consumer.min.session.timeout.ms = 45000 group.consumer.regex.refresh.interval.ms = 600000 group.consumer.session.timeout.ms = 45000 group.coordinator.append.linger.ms = 5 group.coordinator.rebalance.protocols = [classic, consumer, streams] group.coordinator.threads = 4 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 group.share.assignors = [simple] group.share.delivery.count.limit = 5 group.share.enable = false group.share.heartbeat.interval.ms = 5000 group.share.max.heartbeat.interval.ms = 15000 group.share.max.record.lock.duration.ms = 60000 group.share.max.session.timeout.ms = 60000 group.share.max.share.sessions = 2000 group.share.max.size = 200 group.share.min.heartbeat.interval.ms = 5000 group.share.min.record.lock.duration.ms = 15000 group.share.min.session.timeout.ms = 45000 group.share.partition.max.record.locks = 2000 group.share.persister.class.name = org.apache.kafka.server.share.persister.DefaultStatePersister group.share.record.lock.duration.ms = 30000 group.share.session.timeout.ms = 45000 group.streams.heartbeat.interval.ms = 5000 group.streams.max.heartbeat.interval.ms = 15000 group.streams.max.session.timeout.ms = 60000 group.streams.max.size = 2147483647 group.streams.max.standby.replicas = 2 group.streams.min.heartbeat.interval.ms = 5000 group.streams.min.session.timeout.ms = 45000 group.streams.num.standby.replicas = 0 group.streams.session.timeout.ms = 45000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = REPLICATION-9091 internal.metadata.delete.delay.millis = 60000 internal.metadata.log.segment.bytes = null internal.metadata.max.batch.size.in.bytes = 8388608 internal.metadata.max.fetch.size.in.bytes = 8388608 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 listener.security.protocol.map = 
CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:SASL_PLAINTEXT listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dir.failure.timeout.ms = 30000 log.dirs = /var/lib/kafka/data-0/kafka-log0 log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.initial.task.delay.ms = 30000 log.local.retention.bytes = -2 log.local.retention.ms = -2 log.message.timestamp.after.max.ms = 3600000 log.message.timestamp.before.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 24 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 max.request.partition.size.limit = 2000 message.max.bytes = 1048588 metadata.log.dir = /var/lib/kafka/data-0/kafka-log0 metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.max.snapshot.interval.ms = 3600000 
metadata.log.segment.bytes = 1073741824 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = 104857600 metadata.max.retention.ms = 604800000 metric.reporters = [org.apache.kafka.common.metrics.JmxReporter] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 0 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 2 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [broker] producer.id.expiration.check.interval.ms = 600000 producer.id.expiration.ms = 86400000 producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.fetch.max.wait.ms = 500 remote.list.offsets.request.timeout.ms = 30000 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.copier.thread.pool.size = 10 remote.log.manager.copy.max.bytes.per.second = 9223372036854775807 remote.log.manager.copy.quota.window.num = 11 remote.log.manager.copy.quota.window.size.seconds = 1 remote.log.manager.expiration.thread.pool.size = 10 remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807 remote.log.manager.fetch.quota.window.num = 11 remote.log.manager.fetch.quota.window.size.seconds = 1 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 
remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 2 remote.log.metadata.custom.metadata.max.bytes = 128 remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = rlmm.config. remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = rsm.config. remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = GSSAPI sasl.oauthbearer.assertion.algorithm = RS256 
sasl.oauthbearer.assertion.claim.aud = null sasl.oauthbearer.assertion.claim.exp.seconds = 300 sasl.oauthbearer.assertion.claim.iss = null sasl.oauthbearer.assertion.claim.jti.include = false sasl.oauthbearer.assertion.claim.nbf.seconds = 60 sasl.oauthbearer.assertion.claim.sub = null sasl.oauthbearer.assertion.file = null sasl.oauthbearer.assertion.private.key.file = null sasl.oauthbearer.assertion.private.key.passphrase = null sasl.oauthbearer.assertion.template.file = null sasl.oauthbearer.client.credentials.client.id = null sasl.oauthbearer.client.credentials.client.secret = null sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.jwt.retriever.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtRetriever sasl.oauthbearer.jwt.validator.class = class org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator sasl.oauthbearer.scope = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = PLAINTEXT security.providers = null server.max.startup.time.ms = 9223372036854775807 share.coordinator.append.linger.ms = 5 share.coordinator.cold.partition.snapshot.interval.ms = 300000 share.coordinator.load.buffer.size = 5242880 share.coordinator.snapshot.update.records.per.snapshot = 500 share.coordinator.state.topic.compression.codec = 0 share.coordinator.state.topic.min.isr = 2 share.coordinator.state.topic.num.partitions = 50 share.coordinator.state.topic.prune.interval.ms = 300000 share.coordinator.state.topic.replication.factor = 3 share.coordinator.state.topic.segment.bytes 
= 104857600 share.coordinator.threads = 1 share.coordinator.write.timeout.ms = 5000 share.fetch.purgatory.purge.interval.requests = 1000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.allow.dn.changes = false ssl.allow.san.changes = false ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = HTTPS ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS telemetry.max.bytes = 1048576 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.partition.verification.enable = true transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transaction.two.phase.commit.enable = false transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false unclean.leader.election.interval.ms = 300000 unstable.api.versions.enable = false unstable.feature.versions.enable = false2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high 
water mark yet.2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [main] BrokerServer:57 - [BrokerServer id=0] Waiting for controller quorum voters future2026-05-15 14:04:31 INFO [main] BrokerServer:60 - [BrokerServer id=0] Finished waiting for controller quorum voters future2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [broker-0-to-controller-forwarding-channel-manager] NodeToControllerRequestThread:133 - [broker-0-to-controller-forwarding-channel-manager]: Starting2026-05-15 14:04:31 INFO [client-metrics-reaper] SystemTimerReaper$Reaper:133 - [client-metrics-reaper]: Starting2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:31 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still 
catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-raft-io-thread] KafkaRaftClient:3430 - [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@8200783822026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:32 INFO 
[kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:33 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] 
MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [main] ConnectionQuotas:69 - Updated connection-accept-rate max connection creation rate to 21474836472026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:34 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the 
high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-raft-io-thread] QuorumState:732 - [RaftManager id=0] Attempting durable transition to FollowerState(fetchTimeoutMs=2000, epoch=1, leader=1, leaderEndpoints=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc/:9090}), votedKey=Optional.empty, voters=[1], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from UnattachedState(epoch=1, leaderId=OptionalInt.empty, votedKey=Optional.empty, voters=[1], electionTimeoutMs=1092, highWatermark=Optional.empty)2026-05-15 14:04:35 INFO [kafka-0-metadata-loader-event-handler] MetadataLoader:233 - [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet.2026-05-15 14:04:35 INFO [kafka-0-raft-io-thread] QuorumState:749 - [RaftManager 
id=0] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=1, leader=1, leaderEndpoints=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=kafka-cluster-controller-1.kafka-cluster-kafka-brokers.jx-greencapitaltrade-thor-pr-512.svc/:9090}), votedKey=Optional.empty, voters=[1], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from UnattachedState(epoch=1, leaderId=OptionalInt.empty, votedKey=Optional.empty, voters=[1], electionTimeoutMs=1092, highWatermark=Optional.empty)2026-05-15 14:04:35 ERROR [kafka-0-raft-io-thread] ProcessTerminatingFaultHandler:46 - Encountered fatal fault: Unexpected error in raft IO threadjava.lang.IllegalArgumentException: Attempt to truncate to offset 3629, which is below the current high watermark 7250 at kafka.raft.KafkaMetadataLog.truncateTo(KafkaMetadataLog.scala:176) ~[kafka_2.13-4.1.0.jar:?] at org.apache.kafka.raft.ReplicatedLog.truncateToEndOffset(ReplicatedLog.java:235) ~[kafka-raft-4.1.0.jar:?] at kafka.raft.KafkaMetadataLog.truncateToEndOffset(KafkaMetadataLog.scala:55) ~[kafka_2.13-4.1.0.jar:?] at org.apache.kafka.raft.KafkaRaftClient.handleFetchResponse(KafkaRaftClient.java:1751) ~[kafka-raft-4.1.0.jar:?] at org.apache.kafka.raft.KafkaRaftClient.handleResponse(KafkaRaftClient.java:2605) ~[kafka-raft-4.1.0.jar:?] at org.apache.kafka.raft.KafkaRaftClient.handleInboundMessage(KafkaRaftClient.java:2759) ~[kafka-raft-4.1.0.jar:?] at org.apache.kafka.raft.KafkaRaftClient.poll(KafkaRaftClient.java:3535) ~[kafka-raft-4.1.0.jar:?] at org.apache.kafka.raft.KafkaRaftClientDriver.doWork(KafkaRaftClientDriver.java:64) [kafka-raft-4.1.0.jar:?] 
at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:136) [kafka-server-common-4.1.0.jar:?]: failed to run helmfile sync: failed to run 'helmfile --file preview/helmfile.yaml.gotmpl sync' command in directory '', output: 'Adding repo bitnami https://charts.bitnami.com/bitnami "bitnami" has been added to your repositories Adding repo jx3 https://jenkins-x-charts.github.io/repo "jx3" has been added to your repositories Building dependency release=postgresql, chart=postgresql Building dependency release=redis, chart=redis Building dependency release=kafka-cluster, chart=kafka-cluster Building dependency release=preview, chart=../charts/thor Upgrading release=postgresql, chart=postgresql Upgrading release=jx-verify, chart=jx3/jx-verify Upgrading release=kafka-cluster, chart=kafka-cluster Upgrading release=redis, chart=redis hook[presync] logs | Upgrading release=preview, chart=../charts/thor Release "kafka-cluster" has been upgraded. Happy Helming! NAME: kafka-cluster LAST DEPLOYED: Fri May 15 13:57:11 2026 NAMESPACE: jx-greencapitaltrade-thor-pr-512 STATUS: deployed REVISION: 3 TEST SUITE: None Listing releases matching ^kafka-cluster$ Release "jx-verify" has been upgraded. Happy Helming! NAME: jx-verify LAST DEPLOYED: Fri May 15 13:57:12 2026 NAMESPACE: jx-greencapitaltrade-thor-pr-512 STATUS: deployed REVISION: 3 TEST SUITE: None NOTES: jx-verify-pods has been installed successfully. To verify the Job is setup correctly, run: kubectl --namespace=jx-greencapitaltrade-thor-pr-512 get job -l "app=jx-verify-gc-jobs-qej2r" Listing releases matching ^jx-verify$ kafka-cluster jx-greencapitaltrade-thor-pr-512 3 2026-05-15 13:57:11.225055512 +0000 UTC deployed kafka-cluster-0.1.0 4.1.0 Release "postgresql" has been upgraded. Happy Helming! 
NAME: postgresql LAST DEPLOYED: Fri May 15 13:57:11 2026 NAMESPACE: jx-greencapitaltrade-thor-pr-512 STATUS: deployed REVISION: 3 TEST SUITE: None NOTES: CHART NAME: postgresql CHART VERSION: 11.9.13 APP VERSION: 14.5.0 ** Please be patient while the chart is being deployed ** PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster: postgresql.jx-greencapitaltrade-thor-pr-512.svc.cluster.local - Read/Write connection To get the password for "postgres" run: export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace jx-greencapitaltrade-thor-pr-512 greencapitaltrade-thor-pr-512-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d) To get the password for "thor" run: export POSTGRES_PASSWORD=$(kubectl get secret --namespace jx-greencapitaltrade-thor-pr-512 greencapitaltrade-thor-pr-512-postgresql -o jsonpath="{.data.password}" | base64 -d) To connect to your database run the following command: kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace jx-greencapitaltrade-thor-pr-512 --image docker.io/bitnamilegacy/postgresql:14.5.0-debian-11-r35 --env="PGPASSWORD=$POSTGRES_PASSWORD" \ --command -- psql --host postgresql -U thor -d postgres -p 5432 > NOTE: If you access the container using bash, make sure that you execute "/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the error "psql: local user with ID 1001} does not exist" To connect to your database from outside the cluster execute the following commands: kubectl port-forward --namespace jx-greencapitaltrade-thor-pr-512 svc/postgresql 5432:5432 & PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U thor -d postgres -p 5432 Listing releases matching ^postgresql$ jx-verify jx-greencapitaltrade-thor-pr-512 3 2026-05-15 13:57:12.425603654 +0000 UTC deployed jx-verify-0.3.19 0.3.19 postgresql jx-greencapitaltrade-thor-pr-512 3 2026-05-15 13:57:11.830528142 +0000 UTC deployed postgresql-11.9.13 14.5.0 Release "redis" has 
been upgraded. Happy Helming! NAME: redis LAST DEPLOYED: Fri May 15 13:57:11 2026 NAMESPACE: jx-greencapitaltrade-thor-pr-512 STATUS: deployed REVISION: 3 TEST SUITE: None NOTES: CHART NAME: redis CHART VERSION: 18.6.1 APP VERSION: 7.2.3 ** Please be patient while the chart is being deployed ** Redis® can be accessed via port 6379 on the following DNS name from within your cluster: redis-master.jx-greencapitaltrade-thor-pr-512.svc.cluster.local To connect to your Redis® server: 1. Run a Redis® pod that you can use as a client: kubectl run --namespace jx-greencapitaltrade-thor-pr-512 redis-client --restart='Never' --image docker.io/bitnamilegacy/redis:7.2.3-debian-11-r2 --command -- sleep infinity Use the following command to attach to the pod: kubectl exec --tty -i redis-client \ --namespace jx-greencapitaltrade-thor-pr-512 -- bash 2. Connect using the Redis® CLI: redis-cli -h redis-master To connect to your database from outside the cluster execute the following commands: kubectl port-forward --namespace jx-greencapitaltrade-thor-pr-512 svc/redis-master 6379:6379 & redis-cli -h 127.0.0.1 -p 6379 Listing releases matching ^redis$ redis jx-greencapitaltrade-thor-pr-512 3 2026-05-15 13:57:11.624937443 +0000 UTC deployed redis-18.6.1 7.2.3 UPDATED RELEASES: NAME NAMESPACE CHART VERSION DURATION kafka-cluster jx-greencapitaltrade-thor-pr-512 ./kafka-cluster 0.1.0 8s jx-verify jx-greencapitaltrade-thor-pr-512 jx3/jx-verify 0.3.19 8s postgresql jx-greencapitaltrade-thor-pr-512 ./postgresql 11.9.13 9s redis jx-greencapitaltrade-thor-pr-512 ./redis 18.6.1 9s FAILED RELEASES: NAME NAMESPACE CHART VERSION DURATION preview jx-greencapitaltrade-thor-pr-512 ../charts/thor 10m9s in preview/helmfile.yaml.gotmpl: failed processing release preview: command "/usr/bin/helm" exited with non-zero status: PATH: /usr/bin/helm ARGS: 0: helm (4 bytes) 1: upgrade (7 bytes) 2: --install (9 bytes) 3: preview (7 bytes) 4: ../charts/thor (14 bytes) 5: --wait (6 bytes) 6: --timeout (9 bytes) 7: 
600s (4 bytes) 8: --create-namespace (18 bytes) 9: --namespace (11 bytes) 10: jx-greencapitaltrade-thor-pr-512 (32 bytes) 11: --values (8 bytes) 12: /tmp/helmfile4222222071/jx-greencapitaltrade-thor-pr-512-preview-values-57796665 (80 bytes) 13: --values (8 bytes) 14: /tmp/helmfile3031046422/jx-greencapitaltrade-thor-pr-512-preview-values-7c4dd59455 (82 bytes) 15: --reset-values (14 bytes) 16: --history-max (13 bytes) 17: 10 (2 bytes) ERROR: exit status 1 EXIT STATUS 1 STDERR: Error: UPGRADE FAILED: context deadline exceeded COMBINED OUTPUT: Error: UPGRADE FAILED: context deadline exceeded'  Pipeline failed on stage 'from-build-pack' : container 'step-promote-jx-preview'. The execution of the pipeline has stopped.