Why Kubernetes 1.35 Feels Like a Security-First Release
December 2, 2025
Kubernetes updates usually land with a familiar rhythm: a handful of quality-of-life tweaks, some behind-the-scenes cleanups, and maybe a flashy feature or two if the stars align. Version 1.35 breaks that pattern. This release feels different. It feels like the Kubernetes maintainers finally grabbed the security checklist that's been growing in the corner for years and said, "Alright, let's knock all of these out at once."
And people definitely noticed. Engineers digging through the changelog keep circling back to the same point: this release is subtly massive for anyone who cares about hardening their clusters… and a little nerve-wracking for anyone who's been depending on older behavior.
A breakdown of the changes circulating in community conversations points to something bigger than a typical incremental update. It's a release about raising the bar. Cleaning up legacy. Closing doors that stayed open longer than they should have. Making Kubernetes safer even if doing that makes some setups groan.
Let's walk through what's actually changing — and why this one feels like the most security-driven release Kubernetes has delivered in a long while.
## Security Changes That Might Break Your Day
A cluster upgrade isn't a scary thing until the phrase "this may break things" shows up. And 1.35 has a few of those.
### 1. Goodbye, cgroup v1 (#5573)
No surprise here — this has been in motion for a while — but Kubernetes officially dropping cgroup v1 is one of those moments that hits harder in production than you expect.
Some workloads have quietly relied on v1 quirks for years. The community chatter around this one is basically a mix of "thank goodness, it's time," and "if this breaks my setup, I'm switching platforms."
It's a big compatibility cliff, and people know it.
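Before upgrading, it's worth confirming which cgroup mode each node actually runs. One common check (on the node itself) is to inspect the filesystem type mounted at `/sys/fs/cgroup`:

```shell
# Check which cgroup version this node is running.
# "cgroup2fs" means cgroup v2 (the only mode Kubernetes supports going
# forward); "tmpfs" typically indicates the legacy cgroup v1 hierarchy.
stat -fc %T /sys/fs/cgroup/
```

If this reports anything other than `cgroup2fs`, the node needs to move to a cgroup v2 host OS before taking the 1.35 upgrade.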
### 2. Enforcing Secret-Pulled Images (#2535)
This change tightens how Kubernetes handles images that depend on secrets for pulling. If your cluster had any fuzzy or inconsistent configurations here, the new enforcement will call those out fast.
It's the kind of fix that makes perfect sense from a security standpoint — letting images pull without the right secret checks was always a risky blind spot — but it's also the sort of thing that exposes dusty corners in CI/CD pipelines.
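To make the enforcement concrete, the control point lives in the kubelet configuration. A sketch of the relevant fragment, assuming the field shape described in KEP 2535 (verify the exact name against your kubelet version before relying on it):

```yaml
# KubeletConfiguration fragment (assumed shape from KEP 2535).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require every pod to prove it holds valid pull credentials for an
# image, even when that image is already cached on the node.
imagePullCredentialsVerificationPolicy: AlwaysVerify
```

The strict `AlwaysVerify` setting is what surfaces those fuzzy configurations: a pod that previously rode along on an image another tenant pulled will now fail unless it carries its own pull secret.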
### 3. Transition from SPDY to WebSockets (#4006)
SPDY, the long-deprecated protocol that has carried kubectl exec, attach, and port-forward streams, has been hanging around longer than anyone expected. Kubernetes finally moves its remaining dependencies toward WebSockets.
This one shouldn't explode clusters, but it definitely forces some operators to think about tooling, proxies, and older sidecars that still expected SPDY. WebSockets bring a more modern, more secure approach, but like most protocol shifts, it hits at awkward angles until everything catches up.
### 4. Hardened Kubelet Certificate Validation (#4872)
This one is less flashy but arguably one of the most important. The Kubernetes API server now tightens validation around Kubelet serving certificates.
Translation: fewer opportunities for bad identity assumptions, fewer loopholes for misconfigured nodes, and fewer attack paths for impersonation.
It's the type of security hardening that pays off over time — especially in large clusters where certificate handling occasionally drifts out of alignment.
## The New Security Features That Actually Change the Game
Beyond the breaking changes, Kubernetes 1.35 packs several new enhancements that push the platform into a more modern, more locked-down future.
### 1. Constrained Impersonation (#5284)
Impersonation has always been a double-edged sword. It's a necessary feature for tools, operators, and automation… but it's also one of the scariest permissions you can give to anything in your cluster.
The new constrained impersonation enhancement lets you limit who — or what — can impersonate which service accounts or user groups.
Think of it as finally putting seatbelts on a feature that should've had them all along.
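For context, the long-standing RBAC shape for impersonation is coarse: you grant the `impersonate` verb, optionally narrowed with `resourceNames`. The constrained variant in 1.35 layers finer controls on top of this. A sketch of the classic form, with a hypothetical service account name:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: limited-impersonator
rules:
# Allow impersonating only one specific service account, rather than
# granting the blanket "impersonate anything" power.
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["impersonate"]
  resourceNames: ["ci-deployer"]   # hypothetical account name
```

Even this narrowed form says nothing about *when* or *from where* impersonation is allowed; that is the gap the new enhancement aims at.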
### 2. Flagz for Kubernetes Components (#4828)
The short version: Kubernetes components gain the ability to expose their runtime flags in a structured, transparent way.
Why does this matter for security?
Because misconfigurations hide in flags, and being able to introspect them cleanly is one of the fastest ways to spot drift, diagnose weird behavior, or catch something that shouldn't be enabled.
In complex clusters, visibility is half the battle.
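In practice this surfaces as an HTTP endpoint on each component, similar to the existing `/healthz` family. Assuming the endpoint shape described in KEP 4828, you could query the API server's flags with:

```shell
# Dump the API server's runtime flags (requires a live cluster and
# RBAC permission on the raw endpoint; shape per KEP 4828).
kubectl get --raw /flagz
```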
### 3. Allow HostNetwork Pods to Use User Namespaces (#5607)
This is a big deal for environments trying to isolate workloads more aggressively. User namespaces have been a long-requested feature for improving multi-tenant safety.
Letting HostNetwork pods tap into that isolation helps shrink the blast radius of privileged workloads.
It's not a magic shield, but it's progress — and progress is rare when dealing with network-level privileges.
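The pod spec fields involved already exist; what 1.35 changes is that the combination is now permitted. A minimal sketch (the image is just a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-userns
spec:
  hostNetwork: true   # shares the node's network namespace
  hostUsers: false    # but runs in its own user namespace, so UID 0
                      # inside the pod maps to an unprivileged host UID
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

A compromise of this pod still sees the host's network, but escaping the user namespace no longer lands you at host root.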
### 4. CSI Driver Opt-In for SA Tokens via Secrets (#5538)
CSI drivers historically received service account tokens whether they needed them or not. You can guess why that was a problem.
Version 1.35 flips that around: drivers now have to opt in.
Least privilege by default — finally.
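The opt-in happens on the `CSIDriver` object via the existing `tokenRequests` field; under the new behavior, drivers that don't declare it simply receive no token. A sketch with hypothetical driver and audience names:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets.csi.example.com   # hypothetical driver name
spec:
  # The driver explicitly requests service account tokens it needs;
  # omitting tokenRequests means it gets none.
  tokenRequests:
  - audience: vault               # hypothetical audience
    expirationSeconds: 600
  requiresRepublish: true         # refresh mounts as tokens rotate
```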
## Features Now Enabled by Default — and What They Mean for Security
Some enhancements have been sitting behind feature gates for ages, waiting for the moment Kubernetes would say, "Okay, everyone's ready." Version 1.35 flips the switch on a bunch of them.
### 1. Pod Certificates (#4317)
Certificate automation inside Kubernetes has always felt a bit like playing with wires that spark if you touch them wrong. Pod certificates make identity more first-class, simpler to automate, and harder to misuse.
A lot of folks have been waiting for this one to leave the experimental phase, and its promotion in 1.35 signals real maturity in workload identity.
### 2. User Namespaces for Pods (#127)
This is part of a larger arc: reducing the gap between root inside a container and root on the host.
User namespaces have been the missing puzzle piece for years, and enabling them by default moves Kubernetes toward a safer baseline for all workloads — even ones that weren't designed with strict isolation in mind.
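Opting a workload in is a single field. A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated
spec:
  hostUsers: false   # run the pod in its own user namespace
  containers:
  - name: app
    image: busybox:1.36
    command: ["id"]  # reports uid 0 inside the pod, which the kernel
                     # maps to a high, unprivileged UID on the host
```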
### 3. OCI Artifacts / Images as VolumeSource (#4639)
This isn't strictly a security feature, but it has real implications. Allowing volumes to be sourced from OCI artifacts opens up cleaner, more consistent ways to distribute read-only content.
Fewer bespoke systems, fewer weird edge cases, and fewer opportunities for someone to smuggle something into your volume mounts.
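The feature shows up as an `image` volume source mounted read-only into the pod. A sketch, with a hypothetical registry reference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-artifact
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["ls", "/data"]
    volumeMounts:
    - name: artifact
      mountPath: /data           # artifact contents appear here, read-only
  volumes:
  - name: artifact
    image:
      reference: registry.example.com/models/config:v1   # hypothetical ref
      pullPolicy: IfNotPresent
```

Because the content rides through the same registry, signing, and pull-policy machinery as container images, it inherits the supply-chain controls you already have.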
### 4. Separate kubectl User Preferences from Cluster Configs (#3104)
This is a quiet but important shift. Mashing user preferences into cluster configs made it easier for mistakes or bad defaults to leak into production.
By separating them, Kubernetes reduces the risk of "oops, that wasn't supposed to happen" moments during deployments.
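Preferences now live in a dedicated `kuberc` file rather than in kubeconfig. A sketch, assuming the alpha format's field names (they may differ in your kubectl version, so verify before adopting):

```yaml
# ~/.kube/kuberc -- user preferences, separate from cluster credentials
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
defaults:
- command: apply
  options:
  - name: server-side       # make server-side apply my personal default
    default: "true"
aliases:
- name: getn                # personal shorthand, never shipped to teammates
  command: get
  appendArgs: ["--namespace", "default"]
```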
### 5. Removal of Gogo Protobuf Dependency (#5589)
Security and serialization libraries have an interesting relationship. Removing older dependencies — especially ones with long histories and mixed safety stories — is always a win. The gogo/protobuf library has been effectively unmaintained for years, so cutting it out removes a standing liability from every API round-trip.
### 6. Structured Authentication Config (#3331)
This enhancement tidies up how authentication settings are expressed, making mistakes harder to make and easier to spot.
Cleaner config = fewer surprises.
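Instead of a pile of `--oidc-*` flags on the API server, authentication is now a versioned config object. A sketch with a hypothetical issuer (check the exact API version against your release):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com   # hypothetical OIDC issuer
    audiences:
    - my-cluster
  claimMappings:
    username:
      claim: sub
      prefix: "oidc:"                 # prevents collisions with local users
```

Because it's a typed object, malformed settings are rejected at load time instead of silently producing a half-configured authenticator.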
### 7. Fine-grained SupplementalGroups Control (#3619)
RBAC isn't the only place where access control matters. SupplementalGroups affect file permissions inside pods, and more granular control lets operators shape workload isolation in ways that match their security models more precisely.
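The control surfaces in the pod's `securityContext`. A sketch of the strict mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups
spec:
  securityContext:
    # Strict: only the groups declared in the pod spec apply; group
    # memberships baked into the image's /etc/group are NOT merged in.
    supplementalGroupsPolicy: Strict
    supplementalGroups: [4000]
  containers:
  - name: app
    image: busybox:1.36
    command: ["id"]
```

The default merge behavior could quietly grant a container extra group-based file access just because the image's `/etc/group` said so; `Strict` closes that gap.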
### 8. Drop-in Kubelet Config Directory (#3983)
This one sounds boring until you've spent a weekend chasing down a misconfigured flag.
A proper drop-in configuration directory means predictable overrides, consistent updates, and fewer accidental changes.
Security loves predictability, and this gives you more of it.
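The layout looks like this sketch (paths are examples; in production the directory would live under `/etc/kubernetes` and be passed via the kubelet's `--config-dir` flag). Fragments must end in `.conf` and merge in lexical order, so later-numbered files override earlier ones:

```shell
# Sketch of a kubelet drop-in config layout (paths are examples).
conf_dir="${KUBELET_CONF_DIR:-./kubelet.conf.d}"
mkdir -p "$conf_dir"

# One small, reviewable override file instead of editing the base config.
cat > "$conf_dir/90-hardening.conf" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
EOF

# The kubelet is then pointed at the directory alongside its base config:
#   kubelet --config /etc/kubernetes/kubelet.yaml --config-dir "$conf_dir"
```

Each override is a tiny file with one clear purpose, which is exactly what you want when auditing who changed what on a node.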
## Why 1.35 Feels Like a Turning Point
There's a vibe around this release — that Kubernetes is stepping into a new era where security isn't a side conversation but the main storyline.
A lot of operators have wanted this shift. Kubernetes has grown up. The easy wins are gone. The platform is battle-tested, widely deployed, and now faces modern threats that weren't even on the radar when the project began.
Version 1.35 reads like a response to that reality.
It closes old doors, replaces aging assumptions, and puts more power in the hands of administrators who want tighter control. Even the changes that risk breaking production do so with a clear goal: safer defaults, better isolation, fewer pathways for things to go wrong quietly.
Is it perfect? Of course not. Any release that touches this many sensitive areas is going to cause some headaches. Someone out there will absolutely upgrade and then shout into a void when something unexpected breaks. That's part of running Kubernetes at scale — the tradeoffs never stop.
But taken as a whole, Kubernetes 1.35 feels honest about the platform's future. We're moving into a world where workloads need strong identity, containers need isolation that actually means something, and infrastructure teams can't afford legacy behaviors that come with invisible risks.
This release doesn't just acknowledge that shift.
It leans into it.