On the nginx-ingress sunset
The Ingress NGINX Controller is a goner. 🪦 After March 2026 there will be no more releases or security patches, so if you’re still running it, you need to plan your migration. The naming doesn’t help: nginx-ingress, ingress-nginx, and the Ingress API are three different things; blink once and you’ve missed which of the confusingly named products is actually being sunset. The sense of urgency is real: this work, planning and execution, needs to be completed by March. This post tries to untangle the mess and make some recommendations along the way.
The Confusion: What is actually going away?
I wish I were joking about having to make the distinction.
There are three distinct entities that often get confused:
- Ingress NGINX Controller (`kubernetes/ingress-nginx`): The Kubernetes project implementation. This is the one being retired: no more releases or security fixes after March 2026.
- F5 NGINX Ingress Controller: A different product, maintained by F5. It is not being sunset.
- The Ingress API (`kind: Ingress`): The Kubernetes API itself. It is not going away … yet.
The Kubernetes Steering and Security Response Committees[^1] communicated clearly, and the retirement announcement[^2] spelled out the timeline. Existing deployments will keep working for a while, so unless you proactively check, you may not realise you’re affected until you’re compromised. Most Kubernetes deployments use this controller.
Stick to the Ingress API (for now)
The Ingress API[^3] itself isn’t going away … yet. It won’t get new features, but it’ll continue to receive security support. In my opinion that should drive your priorities: staying on the Ingress resource and swapping the controller for another Ingress implementation is a good first step. There’s a lot of risk in also migrating to the Gateway API in one go. Get off Ingress NGINX first, onto something that still speaks the Ingress API; you can revisit the Gateway API later as a separate project.
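For many clusters, swapping controllers while keeping the Ingress API boils down to changing `spec.ingressClassName` and auditing your annotations. A minimal sketch (names and the target class are illustrative; anything under `nginx.ingress.kubernetes.io/` will not carry over and needs a per-controller review):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # illustrative name
spec:
  ingressClassName: cilium     # was: nginx — point at the replacement controller's class
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```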
Snippets and other war crimes
If you’re using nginx snippets, you never should have been. That kind of flexibility is one of the reasons Ingress NGINX is going away, and it’s the same story for the Ingress resource: the ecosystem is moving away from arbitrary configuration injection. The risks of that flexibility were identified early. Remember CVE-2021-25742[^4], where custom snippets in ingress-nginx could be used to obtain all secrets in the cluster? What about its baby brother CVE-2021-25745[^5], where the Ingress path field could be abused to obtain the controller’s credentials (and thus, in the default configuration, all secrets)?
Those are the kinds of risks that come with arbitrary, unvalidated configuration passed to a controller. Removing those shortcuts is the debt you can no longer skip paying.
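For the unfamiliar, a snippet annotation injects raw nginx directives straight into the controller’s generated config. A hypothetical example of the pattern being retired (the annotation key is real; the directive is illustrative):

```yaml
metadata:
  annotations:
    # Raw nginx config injected into the controller's server block.
    # Anyone who can create or update an Ingress gets to write
    # directives here — the class of feature CVE-2021-25742 abused.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-Id: $req_id";
```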
What do? Your best option in the short term is likely an nginx sidecar: run NGINX next to the app in the same pod, and let the Ingress controller just route to the service. You can rework your architecture later if performance is provably a concern. Don’t over-engineer the migration; for most commercial use-cases the extra milliseconds of latency won’t kill your SLO. The sidecar has benefits too: it makes whatever concern you had buried in that annotation explicit, which helps later when you need to relocate that concern and find it a clear owner.
🙌 Now, I know it might sound like an icky suggestion. But hey, if it works …
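A minimal sketch of the sidecar approach (image names, ports, and the ConfigMap are all illustrative): nginx lives in the same pod, carries the logic that used to be a snippet, and the Service now targets the nginx port instead of the app’s.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest        # illustrative app image, listening on 8080
      ports:
        - containerPort: 8080
    - name: nginx                 # sidecar: owns the config that used to be a snippet
      image: nginx:1.27
      ports:
        - containerPort: 80       # the Service (and thus the Ingress) target this port
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: nginx-conf
      configMap:
        name: my-app-nginx-conf   # holds a server block proxying to localhost:8080
```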
What I did at home, a case-study
I’ve already done the migration on my personal Kubernetes cluster. I replaced Ingress NGINX with Cilium Ingress, which is backed by Envoy. It worked for 95% of my use-cases. The main behavioural difference I experienced was path matching.
With Envoy you should inspect every Ingress custom resource you migrate and make sure its path rules use `pathType: Prefix` where possible. If you leave the path type as `ImplementationSpecific` or use something other than `Prefix`, an Envoy-backed controller will treat the path as a regex, and it’s not the nice PCRE2 you might expect.
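Concretely, the check is mechanical. A rule that leaned on Ingress NGINX’s regex handling should be rewritten as a plain prefix before the swap (paths and the service are illustrative):

```yaml
# Before (worked with Ingress NGINX's regex handling, surprising under Envoy):
#   - path: /api(/|$)(.*)
#     pathType: ImplementationSpecific
# After (portable across controllers):
paths:
  - path: /api
    pathType: Prefix
    backend:
      service:
        name: api           # illustrative service name
        port:
          number: 8080
```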
I still have one stubborn application that just won’t work with Envoy yet, and I’ve isolated it using network policies for now.
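Isolating a stubborn legacy app with a network policy can be as small as this sketch (namespace and labels are illustrative): deny all ingress traffic to the pod except from the one component that still needs to reach it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-legacy-app
  namespace: legacy               # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: stubborn-app           # illustrative label on the legacy pod
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: edge-proxy    # only the dedicated proxy may reach it
```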
Again, I want to hammer this home: picking a new ingress controller is half the battle; dealing with things like this is the other half.
Gateway API, not yet
You’ll hear a lot about the Gateway API. It’s a better long-term direction, but it’s rich and layered, and it will require careful design to roll out in your organization. All the simple things you took for granted become strategies that require careful planning: nesting multiple services under one or several addresses is non-trivial. You’ll have to completely rethink how you do TLS termination: secrets no longer reside in the same location, and you’ll need explicit cross-namespace allow-list rules. You’ll discover that the Gateway API is fundamentally made for layering APIs on a unified address space; the implication is that vHost-style ingresses serving multiple domain names are a bit of a second-class citizen that requires more wiring. Another innocuous-looking thing I discovered is that the equivalent of `nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"` in this new world is a multi-day project with CA bundles attached to Gateway resources.
Everything becomes more explicit, and everything explicit is real new work.
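To give a taste of that explicitness: in the Gateway API, a route in one namespace referencing a Service in another requires a `ReferenceGrant` in the Service’s namespace; without it, the reference is simply rejected. A sketch with illustrative names:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-backend
  namespace: backend                  # where the Service lives
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: frontend             # illustrative namespace holding the routes
  to:
    - group: ""                       # core API group
      kind: Service
```

It’s a good security default, but it’s exactly the kind of rule someone has to design, document, and hand to tenants.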
If you have tenants to whom you need to expose these changes and your Helm chart or platform-as-a-service game isn’t top notch, the changes will be a challenge to communicate, let alone implement. They’re all excellent changes that make sense, and the new design gives you excellent operational tools long term, but it’s a lot to adapt to at once, and a lot that needs to be designed properly, especially if you’re handing it off to someone else.
Complexity compounds.
There’s no need for you to migrate to the Gateway API immediately. “Divide and Conquer” has been trendy since about the 4th century BC. It’s likely still a good strategy today.
[^1]: Kubernetes Steering Committee & Kubernetes Security Response Committee. (2026, January 29). Ingress NGINX: Statement from the Kubernetes Steering and Security Response Committees. Kubernetes Blog. https://kubernetes.io/blog/2026/01/29/ingress-nginx-statement/

[^2]: Sable, T. (2025, November 11). Ingress NGINX Retirement: What You Need to Know. Kubernetes Blog. https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/

[^3]: The Kubernetes Authors. (n.d.). Ingress. Kubernetes Documentation. https://kubernetes.io/docs/concepts/services-networking/ingress/

[^4]: National Vulnerability Database. (2021). CVE-2021-25742: A security issue was discovered in ingress-nginx where a user that can create or update ingress objects can use the custom snippets feature to obtain all secrets in the cluster. NIST. https://nvd.nist.gov/vuln/detail/cve-2021-25742

[^5]: National Vulnerability Database. (2022). CVE-2021-25745: A security issue was discovered in ingress-nginx where a user that can create or update ingress objects can use the spec.rules[].http.paths[].path field to obtain the credentials of the ingress-nginx controller; in the default configuration, that credential has access to all secrets in the cluster. NIST. https://nvd.nist.gov/vuln/detail/cve-2021-25745