
Posted: 2025-03-20 / Last updated: 2025-03-22

Building a Kubernetes Cluster with Cilium

Overview

  • Walk through how to build a Kubernetes cluster that uses Cilium
  • Cilium is a networking tool built on eBPF
  • Enable the CNI, load-balancer (LB), and Ingress features to optimize the cluster's networking

Environment

kubeadm, kubelet, kubectl, and CRI-O are assumed to be installed in advance, following the setup procedure referenced here.

Initializing kubeadm

kubeadm init

Run the following on the control plane.
Installation of kube-proxy is skipped, because its networking role will be handed over to Cilium.

sh
sudo kubeadm init --cri-socket \
  /var/run/crio/crio.sock \
  --pod-network-cidr=10.1.1.0/24 \
  --skip-phases=addon/kube-proxy

Execution log

log
carm1:~$ sudo kubeadm init --cri-socket \
  /var/run/crio/crio.sock \
  --pod-network-cidr=10.1.1.0/24 \
  --skip-phases=addon/kube-proxy
W0319 22:30:32.524132    2360 initconfiguration.go:126] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
[init] Using Kubernetes version: v1.32.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [carm1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.28]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [carm1 localhost] and IPs [192.168.0.28 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [carm1 localhost] and IPs [192.168.0.28 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.441571ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.00191341s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node carm1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node carm1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 2mxzsn.03i3cltu81cwf44c
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.28:6443 --token 2mxzsn.03i3cltu81cwf44c \
        --discovery-token-ca-cert-hash sha256:8fecaf8058bc176f6270a727e97b05ec30d333fc9eb2651986079564546fd75e

Enable running kubectl commands on the control plane

sh
mkdir -p $HOME/.kube
sudo chown $(id -u):$(id -g) -R /etc/kubernetes
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.profile 
source ~/.profile
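
As a quick sanity check (not part of the original steps), confirm that kubectl can actually reach the API server:

sh
# Both commands should answer from https://192.168.0.28:6443 without errors
kubectl cluster-info
kubectl get --raw /readyz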

kubeadm join

sh
sudo kubeadm join 192.168.0.28:6443 --token 2mxzsn.03i3cltu81cwf44c \
        --discovery-token-ca-cert-hash sha256:8fecaf8058bc176f6270a727e97b05ec30d333fc9eb2651986079564546fd75e

Execution log

log
carw1:~$ sudo kubeadm join 192.168.0.28:6443 --token 2mxzsn.03i3cltu81cwf44c \
        --discovery-token-ca-cert-hash sha256:8fecaf8058bc176f6270a727e97b05ec30d333fc9eb2651986079564546fd75e 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
W0319 22:33:55.796338    2404 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:bootstrap:2mxzsn" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.274077ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
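
The bootstrap token embedded in the join command expires after 24 hours by default. If a worker joins later than that, a fresh join command can be generated on the control plane (an extra step, not shown in the log above):

sh
# Prints a new "kubeadm join ..." line with a valid token and CA cert hash
sudo kubeadm token create --print-join-command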

Verification

Checking the nodes

Both nodes still show NotReady at this point; this is expected, because no CNI has been installed yet.

sh
kubectl get nodes -o wide

Execution log

log
carm1:~$ kubectl get nodes -o wide
NAME    STATUS     ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
carm1   NotReady   control-plane   6m41s   v1.32.3   192.168.0.28   <none>        Ubuntu 24.04.2 LTS   6.8.0-55-generic   cri-o://1.32.1
carw1   NotReady   <none>          3m37s   v1.32.3   192.168.0.29   <none>        Ubuntu 24.04.2 LTS   6.8.0-55-generic   cri-o://1.32.1

Checking the pods

The CoreDNS pods stay Pending; this is also expected until a CNI is in place.

sh
kubectl get pod -A

Execution log

log
carm1:~$ kubectl get pod -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-668d6bf9bc-4qf4n        0/1     Pending   0          9m13s
kube-system   coredns-668d6bf9bc-slnhl        0/1     Pending   0          9m13s
kube-system   etcd-carm1                      1/1     Running   0          9m18s
kube-system   kube-apiserver-carm1            1/1     Running   0          9m18s
kube-system   kube-controller-manager-carm1   1/1     Running   0          9m19s
kube-system   kube-scheduler-carm1            1/1     Running   0          9m18s

Installing Helm

Helm will be needed later, so install it now.

sh
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Execution log

log
carm1:~$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11913  100 11913    0     0  48466      0 --:--:-- --:--:-- --:--:-- 48426
Downloading https://get.helm.sh/helm-v3.17.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
carm1:~$ helm version --short
v3.17.2+gcc0bbbd

Installing Cilium

Exporting Cilium's metrics requires Prometheus, but Prometheus cannot be installed while the cluster has no CNI.
The Cilium installation is therefore split into two passes: a first pass without metrics, followed by an upgrade once Prometheus is in place.

Installing the Cilium CLI

Proceed as described in the official instructions.
On ARM CPUs such as the Raspberry Pi, CLI_ARCH is switched to arm64 (the uname -m check in the script handles this automatically).

sh
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Execution log

log
carm1:~$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 54.1M  100 54.1M    0     0  19.4M      0  0:00:02  0:00:02 --:--:-- 32.4M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0    159      0 --:--:-- --:--:-- --:--:--     0
cilium-linux-amd64.tar.gz: OK
cilium
carm1:~$ cilium version
cilium-cli: v0.18.2 compiled with go1.24.0 on linux/amd64
cilium image (default): v1.17.0
cilium image (stable): v1.17.2
cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found

Installing Cilium (first pass)

The options below are based on the official installation guide.

loadbalancerMode is changed to shared.
In shared mode every Ingress is served through a single shared ingress load balancer, i.e. one LoadBalancer IP, instead of each Ingress getting its own.
My home server has only one IP to spare, so that single IP is reused for all Ingresses.

sh
CILIUM_VERSION=1.17.2
QPS=30
BURST=20
API_SERVER_IP=192.168.0.28
API_SERVER_PORT=6443
cilium install --version ${CILIUM_VERSION} \
  --namespace kube-system \
  --set l2announcements.enabled=true \
  --set k8sClientRateLimit.qps=${QPS} \
  --set k8sClientRateLimit.burst=${BURST} \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER_IP} \
  --set k8sServicePort=${API_SERVER_PORT} \
  --set ingressController.enabled=true \
  --set ingressController.loadbalancerMode=shared

Execution log

log
carm1:~$ CILIUM_VERSION=1.17.2
QPS=30
BURST=20
API_SERVER_IP=192.168.0.28
API_SERVER_PORT=6443
cilium install --version ${CILIUM_VERSION} \
  --namespace kube-system \
  --set l2announcements.enabled=true \
  --set k8sClientRateLimit.qps=${QPS} \
  --set k8sClientRateLimit.burst=${BURST} \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER_IP} \
  --set k8sServicePort=${API_SERVER_PORT} \
  --set ingressController.enabled=true \
  --set ingressController.loadbalancerMode=shared
ℹ️  Using Cilium version 1.17.2
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy
log
carm1:~$ kubectl get pod -n kube-system -w
NAME                               READY   STATUS    RESTARTS   AGE
cilium-9lfnn                       1/1     Running   0          89s
cilium-dgnbk                       1/1     Running   0          89s
cilium-envoy-8pgq6                 1/1     Running   0          89s
cilium-envoy-wrn8v                 1/1     Running   0          89s
cilium-operator-5747c74f4d-v4kv5   1/1     Running   0          89s
coredns-668d6bf9bc-4qf4n           1/1     Running   0          4h33m
coredns-668d6bf9bc-slnhl           1/1     Running   0          4h33m
etcd-carm1                         1/1     Running   0          4h34m
kube-apiserver-carm1               1/1     Running   0          4h34m
kube-controller-manager-carm1      1/1     Running   0          4h34m
kube-scheduler-carm1               1/1     Running   0          4h34m
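
As an additional check not shown in the log above, the Cilium CLI can wait until the agent, the operator, and the kube-proxy replacement all report healthy:

sh
# Blocks until Cilium reports OK, then prints a status summary
cilium status --wait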

L2 announcement policy

Create a CiliumL2AnnouncementPolicy so that every node without the control-plane role announces external and LoadBalancer IPs over its eth* interfaces.

sh
mkdir -p ~/yaml/cilium
cat <<EOF > ~/yaml/cilium/policy.yml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  interfaces:
  - ^eth[0-9]+
  externalIPs: true
  loadBalancerIPs: true
EOF
kubectl apply -f ~/yaml/cilium/policy.yml
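
To verify that the policy was accepted (extra commands, not in the original walkthrough), list the resource; once Services with external or LoadBalancer IPs exist, Cilium should also create per-Service cilium-l2announce-* Leases in kube-system for leader election:

sh
# The policy object itself
kubectl get ciliuml2announcementpolicies.cilium.io
# Per-Service announcement leases appear here once there is something to announce
kubectl get leases -n kube-system | grep cilium-l2announce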

IP pool

Define the range of IP addresses to be handed out as external (LoadBalancer) IPs.

sh
cat <<EOF > ~/yaml/cilium/ippool.yml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ip-pool"
spec:
  blocks:
  - start: "192.168.0.30"
    stop: "192.168.0.39"
EOF
kubectl apply -f ~/yaml/cilium/ippool.yml
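
The pool can be inspected as follows (a verification step added here); it should report IPs available out of the 192.168.0.30-39 range:

sh
kubectl get ciliumloadbalancerippools.cilium.io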

Installing Prometheus

Install it with the kube-prometheus-stack Helm chart.

sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create ns monitoring
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring

Execution log

log
carm1:~$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create ns monitoring
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring
"prometheus-community" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
namespace/monitoring created
Release "kube-prometheus-stack" does not exist. Installing it now.
NAME: kube-prometheus-stack
LAST DEPLOYED: Thu Mar 20 03:20:36 2025
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"

Get Grafana 'admin' user password by running:

  kubectl --namespace monitoring get secrets kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo

Access Grafana local instance:

  export POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -oname)
  kubectl --namespace monitoring port-forward $POD_NAME 3000

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
log
carm1:~$ kubectl get pod -n monitoring
NAME                                                        READY   STATUS    RESTARTS   AGE
alertmanager-kube-prometheus-stack-alertmanager-0           2/2     Running   0          2m45s
kube-prometheus-stack-grafana-5dbf96c9c-bkvxp               3/3     Running   0          2m53s
kube-prometheus-stack-kube-state-metrics-689ffbf56b-vdrtq   1/1     Running   0          2m53s
kube-prometheus-stack-operator-647b976999-7xjt7             1/1     Running   0          2m53s
kube-prometheus-stack-prometheus-node-exporter-h6x9z        1/1     Running   0          2m53s
kube-prometheus-stack-prometheus-node-exporter-shvjq        1/1     Running   0          2m53s
prometheus-kube-prometheus-stack-prometheus-0               2/2     Running   0          2m45s

Feeding Cilium's metrics into Prometheus

There are many options involved, so a values.yaml file is used this time.
Additions compared to the initial install:

  • Enable Hubble
  • Export the metrics to Prometheus (ServiceMonitors for the agent, Envoy, and the operator)
sh
API_SERVER_IP=192.168.0.28
mkdir -p ~/yaml/cilium
cat <<EOF > ~/yaml/cilium/values.yaml
l2announcements:
  enabled: true
k8sClientRateLimit:
  qps: 30
  burst: 20
kubeProxyReplacement: true
k8sServiceHost: ${API_SERVER_IP}
k8sServicePort: 6443
ingressController:
  enabled: true
  loadbalancerMode: shared
hubble:
  enabled: true
  relay:
    enabled: false
  ui:
    enabled: false
prometheus:
  metricsService: true
  enabled: true
  serviceMonitor:
    enabled: true
    labels:
      release: kube-prometheus-stack
    namespace: "monitoring"
envoy:
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true
      labels:
        release: kube-prometheus-stack
      namespace: "monitoring"
operator:
  prometheus:
    metricsService: true
    enabled: true
    serviceMonitor:
      enabled: true
      labels:
        release: kube-prometheus-stack
EOF
sh
CILIUM_VERSION=1.17.2
cilium upgrade --version ${CILIUM_VERSION} \
  --namespace kube-system \
  -f ~/yaml/cilium/values.yaml

Execution log

log
carm1:~$ CILIUM_VERSION=1.17.2
cilium upgrade --version ${CILIUM_VERSION} \
  --namespace kube-system \
  -f ~/yaml/cilium/values.yaml
ℹ️  Using Cilium version 1.17.2
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy
log
carm1:~$ kubectl get pod -n kube-system -w
NAME                               READY   STATUS    RESTARTS   AGE
cilium-bnfl7                       1/1     Running   0          62s
cilium-envoy-8pgq6                 1/1     Running   0          27m
cilium-envoy-wrn8v                 1/1     Running   0          27m
cilium-kvj6h                       1/1     Running   0          62s
cilium-operator-79565dfd9c-4n726   1/1     Running   0          63s
coredns-668d6bf9bc-4qf4n           1/1     Running   0          5h
coredns-668d6bf9bc-slnhl           1/1     Running   0          5h
etcd-carm1                         1/1     Running   0          5h
kube-apiserver-carm1               1/1     Running   0          5h
kube-controller-manager-carm1      1/1     Running   0          5h
kube-scheduler-carm1               1/1     Running   0          5h
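
Since the Cilium CLI manages the installation as a Helm release (assumed here to be named cilium in the kube-system namespace; this check is not in the original log), the values applied by the upgrade can be reviewed with:

sh
# Shows the user-supplied values currently applied to the cilium release
helm get values cilium -n kube-system
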
Check the metrics in Prometheus
log
carm1:~$ kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090 &
[1] 6344
carm1:~$ Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
carm1:~$ curl -s http://localhost:9090/api/v1/label/__name__/values | jq | grep cilium
Handling connection for 9090
    "cilium_act_processing_time_seconds_bucket",
    "cilium_act_processing_time_seconds_count",
    "cilium_act_processing_time_seconds_sum",
    "cilium_agent_api_process_time_seconds_bucket",
    "cilium_agent_api_process_time_seconds_count",
    "cilium_agent_api_process_time_seconds_sum",
    "cilium_agent_bootstrap_seconds_bucket",
    "cilium_agent_bootstrap_seconds_count",
    "cilium_agent_bootstrap_seconds_sum",
    "cilium_bpf_map_capacity",
    "cilium_bpf_map_ops_total",
    "cilium_bpf_map_pressure",
    "cilium_bpf_maps",
    "cilium_bpf_maps_virtual_memory_max_bytes",
    "cilium_bpf_progs",
    "cilium_bpf_progs_virtual_memory_max_bytes",
    "cilium_cidrgroups_referenced",
    "cilium_controllers_failing",
    "cilium_controllers_group_runs_total",
    "cilium_controllers_runs_duration_seconds_bucket",
    "cilium_controllers_runs_duration_seconds_count",
    "cilium_controllers_runs_duration_seconds_sum",
    "cilium_controllers_runs_total",
    "cilium_datapath_conntrack_dump_resets_total",
    "cilium_datapath_conntrack_gc_duration_seconds_bucket",
    "cilium_datapath_conntrack_gc_duration_seconds_count",
    "cilium_datapath_conntrack_gc_duration_seconds_sum",
    "cilium_datapath_conntrack_gc_entries",
    "cilium_datapath_conntrack_gc_key_fallbacks_total",
    "cilium_datapath_conntrack_gc_runs_total",
    "cilium_drift_checker_config_delta",
    "cilium_drop_bytes_total",
    "cilium_drop_count_total",
    "cilium_endpoint",
    "cilium_endpoint_regeneration_time_stats_seconds_bucket",
    "cilium_endpoint_regeneration_time_stats_seconds_count",
    "cilium_endpoint_regeneration_time_stats_seconds_sum",
    "cilium_endpoint_regenerations_total",
    "cilium_endpoint_state",
    "cilium_errors_warnings_total",
    "cilium_event_ts",
    "cilium_feature_adv_connect_and_lb_bandwidth_manager_enabled",
    "cilium_feature_adv_connect_and_lb_bgp_enabled",
    "cilium_feature_adv_connect_and_lb_cilium_envoy_config_enabled",
    "cilium_feature_adv_connect_and_lb_cilium_node_config_enabled",
    "cilium_feature_adv_connect_and_lb_egress_gateway_enabled",
    "cilium_feature_adv_connect_and_lb_envoy_proxy_enabled",
    "cilium_feature_adv_connect_and_lb_k8s_internal_traffic_policy_enabled",
    "cilium_feature_adv_connect_and_lb_kube_proxy_replacement_enabled",
    "cilium_feature_adv_connect_and_lb_l2_lb_enabled",
    "cilium_feature_adv_connect_and_lb_l2_pod_announcement_enabled",
    "cilium_feature_adv_connect_and_lb_node_port_configuration",
    "cilium_feature_adv_connect_and_lb_sctp_enabled",
    "cilium_feature_adv_connect_and_lb_vtep_enabled",
    "cilium_feature_controlplane_cilium_endpoint_slices_enabled",
    "cilium_feature_controlplane_identity_allocation",
    "cilium_feature_controlplane_ipam",
    "cilium_feature_datapath_chaining_enabled",
    "cilium_feature_datapath_config",
    "cilium_feature_datapath_internet_protocol",
    "cilium_feature_datapath_network",
    "cilium_feature_network_policies_host_firewall_enabled",
    "cilium_feature_network_policies_internal_traffic_policy_services_total",
    "cilium_feature_network_policies_local_redirect_policy_enabled",
    "cilium_feature_network_policies_mutual_auth_enabled",
    "cilium_feature_network_policies_non_defaultdeny_policies_enabled",
    "cilium_forward_bytes_total",
    "cilium_forward_count_total",
    "cilium_fqdn_gc_deletions_total",
    "cilium_fqdn_selectors",
    "cilium_hive_status",
    "cilium_identity",
    "cilium_identity_cache_timer_duration_bucket",
    "cilium_identity_cache_timer_duration_count",
    "cilium_identity_cache_timer_duration_sum",
    "cilium_identity_cache_timer_trigger_folds_bucket",
    "cilium_identity_cache_timer_trigger_folds_count",
    "cilium_identity_cache_timer_trigger_folds_sum",
    "cilium_identity_cache_timer_trigger_latency_bucket",
    "cilium_identity_cache_timer_trigger_latency_count",
    "cilium_identity_cache_timer_trigger_latency_sum",
    "cilium_identity_label_sources",
    "cilium_ip_addresses",
    "cilium_ipam_capacity",
    "cilium_ipam_events_total",
    "cilium_ipcache_errors_total",
    "cilium_k8s_client_api_calls_total",
    "cilium_k8s_client_api_latency_time_seconds_bucket",
    "cilium_k8s_client_api_latency_time_seconds_count",
    "cilium_k8s_client_api_latency_time_seconds_sum",
    "cilium_k8s_client_rate_limiter_duration_seconds_bucket",
    "cilium_k8s_client_rate_limiter_duration_seconds_count",
    "cilium_k8s_client_rate_limiter_duration_seconds_sum",
    "cilium_k8s_terminating_endpoints_events_total",
    "cilium_kubernetes_events_received_total",
    "cilium_kubernetes_events_total",
    "cilium_nat_endpoint_max_connection",
    "cilium_node_connectivity_latency_seconds",
    "cilium_node_connectivity_status",
    "cilium_node_health_connectivity_latency_seconds_bucket",
    "cilium_node_health_connectivity_latency_seconds_count",
    "cilium_node_health_connectivity_latency_seconds_sum",
    "cilium_node_health_connectivity_status",
    "cilium_nodes_all_datapath_validations_total",
    "cilium_nodes_all_events_received_total",
    "cilium_nodes_all_num",
    "cilium_operator_doublewrite_crd_identities",
    "cilium_operator_doublewrite_crd_only_identities",
    "cilium_operator_doublewrite_kvstore_identities",
    "cilium_operator_doublewrite_kvstore_only_identities",
    "cilium_operator_errors_warnings_total",
    "cilium_operator_feature_adv_connect_and_lb_gateway_api_enabled",
    "cilium_operator_feature_adv_connect_and_lb_ingress_controller_enabled",
    "cilium_operator_feature_adv_connect_and_lb_l7_aware_traffic_management_enabled",
    "cilium_operator_feature_adv_connect_and_lb_lb_ipam_enabled",
    "cilium_operator_feature_adv_connect_and_lb_node_ipam_enabled",
    "cilium_operator_identity_gc_entries",
    "cilium_operator_identity_gc_latency",
    "cilium_operator_identity_gc_runs",
    "cilium_operator_lbipam_conflicting_pools",
    "cilium_operator_lbipam_services_matching",
    "cilium_operator_lbipam_services_unsatisfied",
    "cilium_operator_number_of_ceps_per_ces_bucket",
    "cilium_operator_number_of_ceps_per_ces_count",
    "cilium_operator_number_of_ceps_per_ces_sum",
    "cilium_operator_process_cpu_seconds_total",
    "cilium_operator_process_max_fds",
    "cilium_operator_process_network_receive_bytes_total",
    "cilium_operator_process_network_transmit_bytes_total",
    "cilium_operator_process_open_fds",
    "cilium_operator_process_resident_memory_bytes",
    "cilium_operator_process_start_time_seconds",
    "cilium_operator_process_virtual_memory_bytes",
    "cilium_operator_process_virtual_memory_max_bytes",
    "cilium_operator_statedb_table_contention_seconds_bucket",
    "cilium_operator_statedb_table_contention_seconds_count",
    "cilium_operator_statedb_table_contention_seconds_sum",
    "cilium_operator_statedb_table_graveyard_objects",
    "cilium_operator_statedb_table_objects",
    "cilium_operator_statedb_table_revision",
    "cilium_operator_statedb_write_txn_duration_seconds_bucket",
    "cilium_operator_statedb_write_txn_duration_seconds_count",
    "cilium_operator_statedb_write_txn_duration_seconds_sum",
    "cilium_operator_unmanaged_pods",
    "cilium_operator_version",
    "cilium_policy",
    "cilium_policy_change_total",
    "cilium_policy_endpoint_enforcement_status",
    "cilium_policy_implementation_delay_bucket",
    "cilium_policy_implementation_delay_count",
    "cilium_policy_implementation_delay_sum",
    "cilium_policy_l7_total",
    "cilium_policy_max_revision",
    "cilium_policy_regeneration_total",
    "cilium_policy_selector_match_count_max",
    "cilium_process_cpu_seconds_total",
    "cilium_process_max_fds",
    "cilium_process_network_receive_bytes_total",
    "cilium_process_network_transmit_bytes_total",
    "cilium_process_open_fds",
    "cilium_process_resident_memory_bytes",
    "cilium_process_start_time_seconds",
    "cilium_process_virtual_memory_bytes",
    "cilium_process_virtual_memory_max_bytes",
    "cilium_service_implementation_delay_bucket",
    "cilium_service_implementation_delay_count",
    "cilium_service_implementation_delay_sum",
    "cilium_services_events_total",
    "cilium_subprocess_start_total",
    "cilium_unreachable_health_endpoints",
    "cilium_unreachable_nodes",
    "cilium_version",
carm1:~$ fg
kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090
^Ccarm1:~$
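
Beyond listing metric names, individual series can be queried through the same Prometheus HTTP API. As a small spot check (not in the original log), re-run the port-forward if it has been stopped and query the cilium_version metric:

sh
kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090 &
# Returns one sample per Cilium agent, labelled with the running version
curl -s 'http://localhost:9090/api/v1/query?query=cilium_version' | jq '.data.result'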