# Kubernetes Up and Running

## Setting your kubeconfig

To make things easier, copy your kubeconfig file to the root of this project and name it `kubeconfig.conf`. Then just remember to run `source scripts/profile` to start using the `kubectl` command. The script simply sets the `KUBECONFIG` environment variable to the `kubeconfig.conf` file for the current bash session.

## Accessing a pod from your laptop

### Port forwarding

```
kubectl port-forward <pod_name> <local_port>:<pod_port>
```

### Basic logging

```
kubectl logs <pod_name>
```

Stream logs with `-f`:

```
kubectl logs -f <pod_name>
```

To view logs from the previous instance of a pod, use `--previous`. This is useful if the pod keeps restarting.

```
kubectl logs --previous <pod_name>
```

## Running commands in your container with exec

### One-off commands

```
kubectl exec <pod_name> -- <cmd>
```

For example:

```
kubectl exec kuard -- date
```

### Interactive sessions

```
kubectl exec -it <pod_name> -- <cmd>
```

## Copying files to and from a running container

This is generally an anti-pattern: you should treat the contents of a container as immutable.

```
kubectl cp <pod_name>:<path> <host_path>
kubectl cp <host_path> <pod_name>:<path>
```

## Demo app

https://github.com/kubernetes-up-and-running/kuard

Here is an example of using cgroups to limit a container to 200 MB of RAM and 1 GB of combined memory plus swap (`--memory-swap` sets the total, not the swap alone). If these limits are exceeded, which you can test with the kuard application, the container will be terminated.

```
docker run -d --name kuard -p 8080:8080 --memory 200m --memory-swap 1G docker1.runcible.io:5151/kuard:latest
```

If you wanted to limit CPU, you could use:

```
docker run -d --name kuard -p 8080:8080 --memory 200m --memory-swap 1G --cpu-shares 1024 docker1.runcible.io:5151/kuard:latest
```

## Exposing a service

### Legacy (1.17?) way

This is apparently the legacy way of doing it.

Start by creating a deployment:

```
kubectl run alpaca-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:blue \
  --replicas=3 \
  --port=8080 \
  --labels="ver=1,app=alpaca,env=prod"
```

Then expose the deployment with a Service:

```
kubectl expose deployment alpaca-prod
```

Then check on your service:

```
kubectl get services -o wide
```

Consider adding a readiness check to the deployment. The service will use it to forward traffic only to ready pods. You can watch the endpoints backing the service (and see pods removed from it) with:

```
kubectl get endpoints alpaca-prod --watch
```
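As a sketch, a readiness check goes in the pod template of the deployment (edit it with `kubectl edit deployment alpaca-prod`). The `/ready` path matches kuard's built-in readiness endpoint; the timing values here are assumptions, not from the book:

```yaml
# Fragment: add under spec.template.spec.containers[0] of the deployment.
readinessProbe:
  httpGet:
    path: /ready        # kuard's readiness endpoint (toggle it in the kuard UI)
    port: 8080
  initialDelaySeconds: 2  # wait before the first probe (assumed value)
  periodSeconds: 2        # probe interval (assumed value)
  failureThreshold: 3     # consecutive failures before marking not-ready
  successThreshold: 1     # consecutive successes before marking ready
```

Once a pod fails the probe, it drops out of the endpoints list you are watching above.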

### The new way

Note: `kubectl create deployment` does not support a `--labels=` flag.

Create the deployment:

```
kubectl create deployment alpaca-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:blue \
  --replicas=3 \
  --port=8080
```

Label the deployment and its pods:

```
kubectl label deployment alpaca-prod env=prod ver=1
kubectl label pod --selector=app=alpaca-prod env=prod ver=1
```

Expose the deployment while also defining the selector:

```
kubectl expose deployment alpaca-prod --type=NodePort --selector="app=alpaca-prod,ver=1,env=prod"
```

Then check on your service:

```
kubectl get services -o wide
```

Consider adding a readiness check to the deployment. The service will use it to forward traffic only to ready pods. You can watch the endpoints backing the service (and see pods removed from it) with:

```
kubectl get endpoints alpaca-prod --watch
```

## Accessing the exposed service

A cheap way in dev is just to use port forwarding:

```
ALPACA_PROD=$(kubectl get pods -l app=alpaca -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $ALPACA_PROD 48858:8080
```

Another, potentially production-capable, alternative is to use a NodePort type service. This opens a port on every worker node that forwards traffic to the service.

### Option 1: Expose as NodePort

```
kubectl expose deployment alpaca-prod --type=NodePort
```

### Option 2: Modify the Service, switching to NodePort

```
kubectl edit service alpaca-prod
```

Change the `spec.type` field to `NodePort` and save.

Check the port it is being served on:

```
kubectl describe service alpaca-prod
```

## LoadBalancer Services

If the cloud environment supports it, you should be able to edit `spec.type` to use `LoadBalancer`. This builds on top of NodePort: your cloud provider creates a new load balancer and directs it at nodes in your cluster. The service should eventually be assigned an EXTERNAL-IP with a public IP (or hostname) from the cloud vendor.
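A sketch of the resulting Service spec after that edit (the selector labels here follow the legacy `kubectl run` example above and are assumptions, your labels may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: alpaca-prod
spec:
  type: LoadBalancer   # changed from NodePort; the cloud provider provisions the LB
  selector:
    app: alpaca
    ver: "1"
    env: prod
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
```

Watch `kubectl get services` until the EXTERNAL-IP column changes from `<pending>` to a real address.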