Are you trying to play around, or set up a working cluster? If you just want to play around, I'd suggest just using minikube to get things going.
Anecdotally, I got an HA cluster running across 3 boxes in the space of about a month, with maybe 2-3 hours a day spent on it. The key for me was iterating, and probably that I have good experience with infrastructure in general. I started out with a single, insecure machine, added workers, then upgraded the workers to masters in an HA configuration.
I don't think it is really that hard to get a cluster going if you have some infrastructure and networking experience, especially if you start with low expectations and just tackle one thing at a time incrementally.
Full Disclosure: I work for Red Hat in the Container and PaaS Practice in Consulting.
At Red Hat, we define an HA OpenShift/Kubernetes cluster as 3x3xN (3 masters, 3 infra nodes, 3 or more app nodes) [0] which means the API, etcd, the hosted local Container Registry, the Routers, and the App Nodes all provide (N-1)/2 fault tolerance.
Not to brag, since we're well practiced at this, but I can get a 3x3x3 cluster up in a few hours. I've led customers through a basic 3x3x3 install (no hands on keyboard) in less than 2 days, and our consultants are able to install a cluster in 3-5 working days about 90% of the time, even with impediments like corporate proxies, wonky DNS or AD/LDAP, not-so-Enterprise Load Balancers, and disconnected installs. Making a cluster ready for production is about right-sizing and doing good testing.
Worth mentioning that my "got a cluster working in a month" time frame includes starting with zero Kubernetes experience, and no etcd ops experience. Using kops, pretty much anybody can get a full HA cluster running in about 15 minutes. On top of that, it's maybe 5 more minutes to deploy all the addons you'd expect for running production apps on a cloud-backed cluster.
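To give a sense of what "15 minutes with kops" looks like in practice, here is a sketch (the domain name, AWS zones, and S3 bucket are placeholders, and the commands assume working AWS credentials):

```shell
# Hypothetical values throughout; kops needs AWS credentials
# and an S3 bucket to hold its cluster state.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Spreading masters across three availability zones yields
# three masters, i.e. an HA etcd quorum.
kops create cluster \
  --name=dev.example.com \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --node-count=3 \
  --yes
```

From there it's mostly waiting for the instances to come up and for `kops validate cluster` to report healthy.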
The great thing about automation is that once you have these basic tools (Prom/Graf monitoring/alerting, ELK, node pool autoscaling, CI/CD) implemented as declarative manifests, they're deployable anywhere in minutes.
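As a sketch of what "declarative manifests" means here: each addon is just YAML you can apply to any cluster. A minimal, hypothetical example (names, namespace, and image are placeholders) for a single-replica monitoring component:

```yaml
# Hypothetical addon manifest; deployable with: kubectl apply -f dashboard.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-dashboard
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metrics-dashboard
  template:
    metadata:
      labels:
        app: metrics-dashboard
    spec:
      containers:
      - name: dashboard
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000
```

Because the manifest carries the entire desired state, pointing `kubectl apply` at a different cluster reproduces the addon there in minutes.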
It would be good if the "Enterprise Load Balancer" were just another set of servers (with HAProxy + keepalived or something else; I love the "single ip" failover).
Edit: especially load balancing the master servers. (That's actually the hard part of k8s, not even setting it up, with or without OpenShift/Ansible or whatever.)
Load balancing services on k8s itself is basically just running the Calico network and using one or two HAProxy deployments of size 1 with an IP annotation, or just using https://github.com/kubernetes/contrib/tree/master/keepalived...
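The "single ip" failover works because keepalived moves a virtual IP between boxes via VRRP. A minimal sketch of the config for a two-box HAProxy pair in front of the masters (interface name, VIP, and priorities are hypothetical):

```
# /etc/keepalived/keepalived.conf -- placeholder values throughout
vrrp_instance K8S_API {
    state MASTER            # BACKUP on the second HAProxy box
    interface eth0
    virtual_router_id 51
    priority 100            # lower (e.g. 90) on the backup box
    virtual_ipaddress {
        192.168.1.100       # the single VIP clients point at
    }
}
```

Both boxes run an identical HAProxy forwarding the API port to the three masters; if the active box dies, the VIP fails over and clients never change their endpoint.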
I'm trying to set up a cluster in our development environment to play around with in preparation for rolling it to staging and production. So, minikube I have ruled out because it doesn't prove out the most critical parts of what we will need to run it in production.
I do have a lot of infrastructure and networking experience, it was mostly a matter of the ingress setup having many moving parts which were poorly documented. I could see that it had set up bridges and iptables rules and NAT and virtual interfaces, but I was never able to get a picture of how the setup was supposed to work to be able to see what parts of that picture were right or wrong.
There was no clear road-map of setting up a cluster. Most people talking about Kubernetes were doing "toy" deployments, which only had limited application to what I was doing. I only found kubespray because of a passing mention, for example.
I'd say you're about right with a month. Had I given it another week or two, I probably would have gotten it going. I had really only expected it to take a couple days to have a proof of concept cluster, so at 2 weeks I was way beyond what I had slotted to spend on it.
Looking over the Getting Started Guide it looks very simple to get a test cluster set up. Which maybe set my expectations unreasonably high.
I guess that's what I'm trying to say: With the current state of documentation, it's probably a calendar month investment to get going.