Description
What happened:
Since at least version 4.13.0, it seems the real client IP is no longer resolved and written to the logs with use-forwarded-headers / use-proxy-protocol enabled on OpenStack clusters. Instead, I see an internal cluster IP (the load balancer?) in the logs. It looks like the X-Forwarded-* headers are no longer set properly. This issue currently prevents seeing who is sending requests to the cluster and applying any rate limits.
What you expected to happen:
The real client IP should be extracted from the X-Forwarded-* headers and written to the logs, and rate limits should be applied against it.
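The regression is easy to spot because the broken case logs an RFC1918 private address (the internal load balancer) instead of the public client IP. A small hypothetical helper for scanning log output, purely for illustration:

```shell
# Hypothetical helper: classify a logged client address as RFC1918
# private (the broken case) or public (the expected real client IP).
is_private_ip() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo private ;;
    *) echo public ;;
  esac
}

is_private_ip 168.119.170.221   # public  (expected: real client IP)
is_private_ip 10.250.0.246      # private (observed: internal address)
```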
NGINX Ingress controller version
NGINX Ingress controller
Release: v1.14.0
Build: 52c0a83ac9bc72e9ce1b9fe4f2d6dcc8854516a8
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.27.1
-------------------------------------------------------------------------------
Kubernetes version (use kubectl version):
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.4
Environment:
- Cloud provider: PlusServer
- OS (e.g. from /etc/os-release): Flatcar 3975.2.2
- Kernel (e.g. uname -a): n/a
- Install tools: n/a
Basic cluster related info:
kubectl version: v1.31.4kubectl get nodes -o wide: shoot--425629--dev-worker-tvdim-z1-747d8-rvgs9 Ready 121m v1.31.4 10.250.0.113 Flatcar Container Linux by Kinvolk 3975.2.2 (Oklo) 6.6.54-flatcar containerd://1.7.17
- How was the ingress-nginx-controller installed:
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: ingress-nginx
    namespace: ingress-nginx
    includeCRDs: true
    repo: https://kubernetes.github.io/ingress-nginx
    releaseName: ingress-nginx
    version: 4.14.0
    valuesFile: values-pkse-dev.yaml
    apiVersions:
      - monitoring.coreos.com/v1
values.yaml:
rbac:
  create: true
controller:
  kind: DaemonSet
  publishService:
    enabled: true
  config:
    use-forwarded-headers: "true"
    use-proxy-protocol: "true"
  service:
    annotations:
      loadbalancer.openstack.org/proxy-protocol: "true"
Only one instance of the ingress-nginx-controller is installed in this cluster.
Current State of the controller:
kubectl describe ingressclasses:
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
Annotations: argocd.argoproj.io/tracking-id: ingress-nginx-pkse-dev:networking.k8s.io/IngressClass:ingress-nginx/nginx
Controller: k8s.io/ingress-nginx
Events: <none>
kubectl -n <ingresscontrollernamespace> get all -A -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ingress-nginx-controller-vn2nr 1/1 Running 0 30m 10.96.0.72 shoot--425629--dev-worker-tvdim-z1-747d8-x825r <none> <none>
pod/ingress-nginx-controller-xv5b6 1/1 Running 0 30m 10.96.1.63 shoot--425629--dev-worker-tvdim-z1-747d8-rvgs9 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.125.20.114 <redacted> 80:31148/TCP,443:31347/TCP 477d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.119.129.134 <none> 443/TCP 477d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/ingress-nginx-controller 2 2 2 2 2 kubernetes.io/os=linux 117m controller registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name: ingress-nginx-controller-vn2nr
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: shoot--425629--dev-worker-tvdim-z1-747d8-x825r/10.250.2.64
Start Time: Tue, 04 Nov 2025 09:02:44 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
controller-revision-hash=66cd4c45ff
helm.sh/chart=ingress-nginx-4.14.0
pod-template-generation=5
Annotations: <none>
Status: Running
IP: 10.96.0.72
IPs:
IP: 10.96.0.72
Controlled By: DaemonSet/ingress-nginx-controller
Containers:
controller:
Container ID: containerd://2b9f6c6a26b4daa9b988a0d9bfc58181b43a0b960bbb53554b540c9d68294a41
Image: registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Tue, 04 Nov 2025 09:02:45 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-vn2nr (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
KUBERNETES_SERVICE_HOST: api.dev.425629.internal.prod.gardener.get-cloud.io
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sbzt2 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-sbzt2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-vn2nr to shoot--425629--dev-worker-tvdim-z1-747d8-x825r
Normal Pulled 31m kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.14.0@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d" already present on machine
Normal Created 31m kubelet Created container controller
Normal Started 31m kubelet Started container controller
Normal RELOAD 31m nginx-ingress-controller NGINX reload triggered due to a change in configuration
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>:
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
Annotations: argocd.argoproj.io/tracking-id: ingress-nginx-pkse-dev:/Service:ingress-nginx/ingress-nginx-controller
loadbalancer.openstack.org/load-balancer-address: <redacted>
loadbalancer.openstack.org/load-balancer-id: 466bfbec-e357-429a-97ff-71a2ba054232
loadbalancer.openstack.org/proxy-protocol: true
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.125.20.114
IPs: 10.125.20.114
LoadBalancer Ingress: <redacted> (Proxy)
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31148/TCP
Endpoints: 10.96.1.63:80,10.96.0.72:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31347/TCP
Endpoints: 10.96.1.63:443,10.96.0.72:443
Session Affinity: None
External Traffic Policy: Cluster
Internal Traffic Policy: Cluster
Events: <none>
How to reproduce this issue:
Unfortunately I have no idea how to reproduce this with minikube, due to the lack of access to a load balancer and the know-how to set one up. Instead, I set up a clean cluster, installed ingress-nginx and webhook-tester, and then sent requests to the cluster via curl.
Test request, ingress-nginx v4.12.7: curl -X POST --data '{"nginx": "4.12.7"}' https://webhook-tester.dev.***.com/5fa6cb63-3138-4ce1-b65a-4ebe87c9e363
Headers in the cluster:
Accept: */*
Content-Length: 19
Content-Type: application/x-www-form-urlencoded
User-Agent: curl/8.15.0
X-Forwarded-For: 168.119.170.221
X-Forwarded-Host: webhook-tester.dev.***.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Scheme: https
X-Real-Ip: 168.119.170.221
X-Request-Id: a26e14a98c2f014a1af7afecc30dd0b5
X-Scheme: https
Log:
ingress-nginx-controller-878g5 controller 168.119.170.221 - - [04/Nov/2025:09:48:19 +0000] "POST /5fa6cb63-3138-4ce1-b65a-4ebe87c9e363 HTTP/2.0" 200 18 "-" "curl/8.15.0" 119 0.001 [webhook-tester-webhook-tester-8081] [] 10.96.1.114:8080 18 0.001 200 82f340c1185adcf156715bf0b29efc66
Test request, ingress-nginx v4.14.0: curl -X POST --data '{"nginx": "4.12.7"}' https://webhook-tester.dev.***.com/5fa6cb63-3138-4ce1-b65a-4ebe87c9e363
Headers in the cluster:
Accept: */*
Content-Length: 19
Content-Type: application/x-www-form-urlencoded
User-Agent: curl/8.15.0
X-Forwarded-For: 10.250.0.246
X-Forwarded-Host: webhook-tester.dev.***.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Scheme: https
X-Real-Ip: 10.250.0.246
X-Request-Id: 785ed2fd56e1d2bfae0a080fc3733fb6
X-Scheme: https
Log:
ingress-nginx-controller-xv5b6 controller 10.96.0.114 - - [04/Nov/2025:09:44:05 +0000] "POST /5fa6cb63-3138-4ce1-b65a-4ebe87c9e363 HTTP/2.0" 200 18 "-" "curl/8.15.0" 119 0.002 [webhook-tester-webhook-tester-8081] [] 10.96.1.114:8080 18 0.001 200 5d4dd4579c03798ea4875dbbcdfbff0b
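Comparing the two runs, the first field of each access-log line is the client address that changed between chart versions. A hypothetical one-liner to pull it out for a quick diff (the log lines below are abbreviated copies of the ones above):

```shell
# Hypothetical: extract the client address (first log field) from each
# run so the two chart versions can be compared side by side.
old='168.119.170.221 - - [04/Nov/2025:09:48:19 +0000] "POST /... HTTP/2.0" 200'
new='10.96.0.114 - - [04/Nov/2025:09:44:05 +0000] "POST /... HTTP/2.0" 200'
printf '%s\n' "$old" "$new" | awk '{print $1}'
# 168.119.170.221
# 10.96.0.114
```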
Anything else we need to know: