Kubernetes cargo culting
OneThingWell.dev wiki page | Last updated: Apr 28, 2023
Kubernetes is one of the most common examples of cargo culting and overengineering in the tech industry today.
If you're running deployments at the scale of Google, then something like k8s makes a lot of sense. But those cases are rare, and k8s is often sold as a silver bullet, which it is not - it can easily create far more problems than it solves.
Rule of thumb: Unless you're absolutely sure that you need k8s, you probably don't need it.
A collection of comments and quotes that are relevant to this topic, interesting or notable in some other way:
What problem is k8s even trying to solve?
Say you want to deploy two Python servers. One of them needs Python 3.4 and the other needs Python 3.5.
Honestly hilarious. The core value prop example is wanting to run two slightly different minor versions of a programming language on one machine. In order to do that, you get to deploy and configure hundreds to thousands of lines of YAML, learn at least 20 different pieces of abstraction jargon, and continue to spend all your time supporting this mess of infrastructure forever.
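For contrast, the two-Python-servers problem from the quote above can be handled with ordinary process supervision. A minimal sketch using two systemd units, assuming both interpreters are already installed; all paths, unit names, and app locations here are illustrative, not taken from the original:

```ini
# Two hypothetical unit files, one per service.

# /etc/systemd/system/app-a.service
[Unit]
Description=App A on Python 3.4

[Service]
ExecStart=/usr/local/bin/python3.4 /srv/app-a/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/app-b.service
[Unit]
Description=App B on Python 3.5

[Service]
ExecStart=/usr/local/bin/python3.5 /srv/app-b/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With units like these in place, `systemctl enable --now app-a app-b` starts both services, each under its own interpreter, with no cluster, scheduler, or YAML manifests involved.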
How many engineering teams adopt kubernetes because it's what everyone's doing, versus out of genuine well-specified need? I have no idea.
I use k8s at work, and I know it has benefits, but it too often feels like using a bazooka to butter your toast. We don't deploy to millions of users around the globe, and we're all on one Python version (for library compatibility, among other things). Docker is more of an annoying curse than a boon. How much of this complexity exists because Python virtualenvs are confusing? How much would be solved if, instead of "containers", we deployed static binaries (each a couple hundred MB larger because they contain all their dependencies statically linked in... but who's counting)? Idk. Autoscheduling can be nice, but can also be a footgun. To boot, k8s does not have sane defaults; everything's a potential footgun.
In 10 years we're going to have either paved over all of the complexity, and k8s will be as invisible to us then as Linux (mostly) is today; or we'll have realized this is insanity and ditched k8s for some other thing that fills a similar niche.
(top comment from the HN discussion on Solving Common Problems with Kubernetes)
Not sure why you get the downvotes; all I can imagine is that there are a lot of people here who learned on k8s and have never tried anything different, or never had to manage infrastructure at scale (in my experience, scale being 30k+ bare-metal machines).
99.9% of the time I see a k8s implementation, it's a shop pushing a ridiculously low amount of traffic and then needing to scale their application across decades-old hardware provided by a cloud provider. It's so bad that a single, modern 4U server, like the ones I used to manage working at a large company, could have replaced their entire infrastructure. The k8s users wind up with thousands of lines of YAML to solve a problem that could have been solved with better design decisions. The abstractions upon abstractions also prevent developers from truly understanding what is going on in the real world. For example, your cloud provider's hypervisor doesn't align with your para-virtualization hypervisor, so you end up with all sorts of issues with affinity and noisy neighbors on the same hardware that you can't even see; or your racks aren't laid out right, so failures wipe out a disproportionate number of instances, which k8s then takes ages to rebalance.
(from the HN discussion on Reasons Kubernetes is so complex)