Kubernetes Up and Running
Setting your kubeconfig
To make it easier on yourself, copy your kubeconfig file to the root of this project and name it kubeconfig.conf. Then just remember to run source scripts/profile to start using the kubectl command. The profile script simply sets the KUBECONFIG environment variable to the kubeconfig.conf file for the bash session.
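For reference, a minimal sketch of what scripts/profile amounts to (the actual script in this repo may do more):
# Point kubectl at the project-local kubeconfig for this shell session
export KUBECONFIG="$(pwd)/kubeconfig.conf"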
Accessing a pod from your laptop
Port forwarding
kubectl port-forward <pod_name> <local_port>:<pod_port>
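For example, to forward local port 8080 to the kuard demo pod (pod name assumed here to be kuard):
kubectl port-forward kuard 8080:8080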
Basic logging
kubectl logs <pod name>
Stream logs with -f
kubectl logs -f <pod name>
To view logs from the previous instance of the container. Useful if the pod keeps restarting.
kubectl logs --previous <pod_name>
Running commands in your container with exec
One-off commands
kubectl exec <pod_name> -- <cmd>
kubectl exec kuard -- date
Interactive sessions
kubectl exec -it <pod_name> -- <cmd>
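For example, to get a shell inside the kuard pod (assuming the image ships a /bin/sh):
kubectl exec -it kuard -- /bin/sh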
Copying files to and from a running container
This is generally an anti-pattern. You should treat the contents of a container as immutable.
kubectl cp <pod name>:<path> <host path>
kubectl cp <host path> <pod name>:<path>
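For example (the paths here are hypothetical):
# Copy a file out of the kuard pod into the current directory
kubectl cp kuard:/etc/hostname ./kuard-hostname
# Copy a local file into the pod
kubectl cp ./notes.txt kuard:/tmp/notes.txt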
Demo app
https://github.com/kubernetes-up-and-running/kuard
Here is an example of using cgroups to limit container resources to 200 MB of RAM and 1 GB of swap. If these limits are exceeded, which you can test with the kuard application, the container will be terminated.
docker run -d --name kuard -p 8080:8080 --memory 200m --memory-swap 1G docker1.runcible.io:5151/kuard:latest
If you wanted to limit CPU you could use:
docker run -d --name kuard -p 8080:8080 --memory 200m --memory-swap 1G --cpu-shares 1024 docker1.runcible.io:5151/kuard:latest
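As a quick sanity check of the configured limits (standard docker commands; the container name matches the run commands above):
# Live memory/CPU usage for the container
docker stats kuard --no-stream
# Confirm the memory, swap, and CPU share settings docker recorded
docker inspect kuard --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}} {{.HostConfig.CpuShares}}'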
Exposing a service
Legacy (1.17?) way
THIS APPARENTLY IS THE LEGACY WAY OF DOING IT.
Start by creating a deployment
kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=3 \
--port=8080 \
--labels="ver=1,app=alpaca,env=prod"
Then expose the deployment with a Service
kubectl expose deployment alpaca-prod
Then check on your service
kubectl get services -o wide
Consider adding a readiness check to the deployment. The service will use it to forward traffic only to ready pods. You can watch the endpoints used by the service (and watch containers removed from it) with:
kubectl get endpoints alpaca-prod --watch
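A sketch of the readiness probe you could add under the container spec of the deployment (kuard exposes a readiness endpoint; the path and timings here are assumptions to tune for your setup):
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 2
  failureThreshold: 3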
The new way
Note: kubectl create deployment doesn't support a --labels= keyword for some dumb reason, so labels have to be applied in a separate step.
Create the deployment
kubectl create deployment alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=3 \
--port=8080
Label it and the pods
kubectl label deployment alpaca-prod env=prod ver=1
kubectl label pod --selector=app=alpaca-prod env=prod ver=1
Expose the deployment while also defining the selector
kubectl expose deployment alpaca-prod --type=NodePort --selector="app=alpaca-prod,ver=1,env=prod"
Then check on your service
kubectl get services -o wide
As above, consider adding a readiness check to the deployment so the service only forwards traffic to ready pods. You can watch the endpoints used by the service (and watch containers removed from it) with:
kubectl get endpoints alpaca-prod --watch
Accessing the exposed service
A cheap way in dev is just to use port forwarding
ALPACA_PROD=$(kubectl get pods -l app=alpaca -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $ALPACA_PROD 48858:8080
Another potentially production-capable alternative is to use a NodePort type. This will open a port on all workers that will forward traffic to the service.
Option 1: Expose as NodePort
kubectl expose deployment --type=NodePort alpaca-prod
Option 2: Modify Service switching to NodePort
kubectl edit service alpaca-prod
Change the spec.type field to NodePort and save.
Check the port it is being served under:
kubectl describe service alpaca-prod
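To pull just the assigned port out of the service instead of scanning the describe output, something like this works:
kubectl get service alpaca-prod -o jsonpath='{.spec.ports[0].nodePort}'
Then hit any worker node on that port, e.g. curl http://<node_ip>:<node_port>.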
LoadBalancer Services
If the cloud environment supports it, you should be able to edit the spec.type to use LoadBalancer. This builds on top of NodePort: your cloud provider creates a new load balancer and directs it at the nodes in your cluster. The service should eventually be assigned an EXTERNAL-IP with a public IP (or hostname) from the cloud vendor.
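If you would rather not open an editor, a one-line patch does the same thing (a sketch; assumes the service is named alpaca-prod):
kubectl patch service alpaca-prod -p '{"spec": {"type": "LoadBalancer"}}'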
Ingress
Users must install their own Ingress controller. Controllers are "pluggable": you can use a software (Nginx, Envoy), hardware (F5?), or cloud (ELB) load balancer as the Ingress controller.
So Ingress is split into two sections:
Ingress Spec - The set of objects you write.
Ingress Controller - The component that acts on the spec objects. It generally needs to translate the spec into the expected config for the chosen load balancer.
This book uses the Contour (Envoy under the hood) project. I have elected to just use the plain kubectl apply command from the book; there are other install options like an Operator or a Helm chart.
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
I found the namespace with kk get namespaces, which turned out to be projectcontour.
drewbednar@eisenhorn learn_k8s % kk get pods -n projectcontour
NAME READY STATUS RESTARTS AGE
contour-7dd74cc485-sg9tv 1/1 Running 0 3m23s
contour-7dd74cc485-t4j57 1/1 Running 0 3m23s
contour-certgen-v1.16.0-jtnqg 0/1 Completed 0 3m24s
envoy-762hj 2/2 Running 0 3m23s
envoy-v5nt4 1/2 Running 0 3m23s
drewbednar@eisenhorn learn_k8s % kk get -n projectcontour services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.103.217.239 <none> 8001/TCP 4m15s
envoy LoadBalancer 10.96.170.238 <pending> 80:31325/TCP,443:31149/TCP 4m15s
Need a bare metal load balancer
Well, it looks like since I am on bare metal I don't have a LoadBalancer type available. MetalLB is what DR uses, so I guess that's what I am going to implement.
Following the instructions here https://metallb.universe.tf/installation/.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
This gives us a MetalLB system in the metallb-system namespace that is not yet configured. There was a forum post [Unifi bgp config] that mentioned this, but I think I am just going to use the L2 config.
Since my network is 10.0.1.1 with a subnet mask of 255.255.252.0 and my DHCP range starts at 10.0.1.50, I can set aside addresses in the block 10.0.1.40-10.0.1.45 like so:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.1.40-10.0.1.45
kubectl apply -f metallb-l2-config.yaml
and we can check on it
kubectl get configmap -n metallb-system config -o yaml
Finally we are back in business
drewbednar@eisenhorn networking % kk get services -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.103.217.239 <none> 8001/TCP 163m
envoy LoadBalancer 10.96.170.238 10.0.1.40 80:31325/TCP,443:31149/TCP 163m
Configuring DNS (or something close)
Since I do not have DNS in my lab environment, I achieve the same kind of routing to the LoadBalancer IP using the /etc/hosts file of my router on 10.0.1.1:
10.0.1.40 alpaca.runcible.io bandicoot.runcible.io
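A quick way to verify the path end to end without relying on the router's DNS at all is to pin the hostname to the MetalLB address with curl (hostname and IP as configured above):
curl --resolve alpaca.runcible.io:80:10.0.1.40 http://alpaca.runcible.io/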
Back to ingress
In the networking folder we have simple-ingress.yaml
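Roughly what that manifest contains, inferred from the describe output below: a single default backend pointing at the alpaca service on port 8080 (treat this as a sketch; the API group matches the deprecation warning shown later):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
spec:
  backend:
    serviceName: alpaca
    servicePort: 8080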
kubectl apply -f networking/simple-ingress.yaml
verified with
kubectl get ingress
In detail
kubectl describe ingress simple-ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: simple-ingress
Namespace: default
Address: 10.0.1.40
Default backend: alpaca:8080 (192.168.2.88:8080,192.168.9.160:8080,192.168.9.161:8080)
Rules:
Host Path Backends
---- ---- --------
* * alpaca:8080 (192.168.2.88:8080,192.168.9.160:8080,192.168.9.161:8080)
Annotations: <none>
Events: <none>
Using the Nginx Ingress Controller
These instructions were modified from https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal. The only thing I did was change the spec.type from NodePort to LoadBalancer.
curl -o networking/nginx-ingress-controller.yaml <url from above instructions>
kubectl apply -f networking/nginx-ingress-controller.yaml
This creates all the objects in the manifest. We can verify:
kubectl get services --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.99.4.82 10.0.1.40 80:31142/TCP,443:32269/TCP 2m43s
ingress-nginx-controller-admission ClusterIP 10.107.45.77 <none> 443/TCP 2m43s
Seeing that MetalLB did allocate an EXTERNAL-IP in its configured range to the ingress controller.
Deleting the Nginx ingress controller
Just in case you want to run a different ingress controller or scorch the earth on your current Nginx controller, you can delete it like so.
Note: You should double check with a get command before running delete on these.
kubectl delete namespace ingress-nginx
kubectl delete clusterrole ingress-nginx ingress-nginx-admission
kubectl delete clusterrolebinding ingress-nginx ingress-nginx-admission
DaemonSets
Appendix
Restart all the pods in a namespace
kubectl -n {NAMESPACE} rollout restart deploy
Monitor it with
kubectl get -n {NAMESPACE} pods -w
https://cloudowski.com/articles/a-recipe-for-on-prem-kubernetes/
https://metallb.universe.tf/
[unifi bgp config]: https://community.ui.com/questions/How-to-configure-BGP-routing-on-USG-Pro-4/ecdfecb5-a8f5-48a5-90da-cc68d054be11
Kubernetes Cluster Certificates
I set this cluster up over a year ago apparently, and I can see that the certs on the master node have expired.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
Alternatively, in kubeadm 1.19 you can use
kubeadm alpha certs check-expiration
Following these instructions I generated new certs.
The renewal process
kubeadm alpha certs renew all
Then reboot. Check your certs again, then copy the /etc/kubernetes/admin.conf back down for your use.
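A sketch of pulling the refreshed admin config back down and wiring it into the kubeconfig setup from the top of this README (user and host are placeholders):
scp <user>@<master_node>:/etc/kubernetes/admin.conf ./kubeconfig.conf
source scripts/profile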
Helm commands
Helm info
helm list
Getting at the released templated manifests/values.
If you want to see the helm manifest in the K8s cluster, you can get at it through secrets. Helm stores the release as a k8s secret containing a base64-encoded gzip archive:
kubectl get secrets <secret name which is your release> -o jsonpath="{ .data.release }" | base64 -d | base64 -d | gunzip | json_pp
It's easier, though, to get at this info using helm:
helm get manifest <release name>
helm get values <release name>
Running the Templates without releasing
Static command to test a template. Works without a K8s cluster:
helm template [chart]
The dynamic way is a real helm install but without a commit.
--debug here outputs to standard error so you have to redirect:
helm install [release] [chart] --debug --dry-run 2>&1 | less