Committing ch7 work

master
Drew Bednar 4 years ago
parent d3e4b11ea4
commit 668d313ccf

@ -4,7 +4,6 @@
To make it easier on you, copy your kube config file to the root of this project and name it `kubeconfig.conf`. Then just remember to run `source scripts/profile` to start using the `kubectl` command. This just sets the `KUBECONFIG` env var to the `kubeconfig.conf` file for the bash session.
## Accessing a pod from your laptop
Port forwarding
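A minimal sketch (pod name and ports are placeholders; this forwards local port 8080 to port 8080 inside the pod, and requires a running cluster):
```
kubectl port-forward <pod name> 8080:8080
```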
@ -26,6 +25,7 @@ kubectl logs -f <pod name>
```
To view logs from the previous instance of a pod. Useful if the pod keeps restarting.
```
kubectl logs --previous <pod_name>
```
@ -49,6 +49,7 @@ kubectl exec -it <pod name> <cmd>
## Copying files to and from a running container
This is generally an anti-pattern. You should treat the contents of a container as immutable.
```
kubectl cp <pod name>:<path> <host path>
@ -73,3 +74,121 @@ If we wanted to limit cpu you could use
```
docker run -d --name kuard -p 8080:8080 --memory 200m --memory-swap 1G --cpu-shares 1024 docker1.runcible.io:5151/kuard:latest
```
## Exposing a service
### Legacy (1.17?) way
Note: this is apparently the legacy way of doing it; newer versions of `kubectl run` no longer create Deployments.
Start by creating a deployment
```
kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=3 \
--port=8080 \
--labels="ver=1,app=alpaca,env=prod"
```
Then expose the deployment with a Service
```
kubectl expose deployment alpaca-prod
```
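For reference, the Service that `kubectl expose` generates here should look roughly like this (a sketch with field values inferred from the deployment above, not captured output):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: alpaca-prod
spec:
  selector:
    ver: "1"
    app: alpaca
    env: prod
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
```
`kubectl expose` copies the deployment's selector and ports, which is why the labels on the deployment matter.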
Then check on your service
```
kubectl get services -o wide
```
Consider adding a readiness check to the deployment. The service uses it to forward traffic only
to ready pods. You can watch the endpoints used by the service (and watch containers being removed from it)
with:
```
kubectl get endpoints alpaca-prod --watch
```
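A sketch of what that readiness check could look like in the deployment's pod template (the path, port, and timings are assumptions, borrowed from the kuard manifests later in these notes):
```yaml
spec:
  template:
    spec:
      containers:
        - name: alpaca-prod
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
```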
### The new way
Note: `kubectl create deployment` doesn't support a `--labels=` flag, annoyingly.
Create the deployment
```
kubectl create deployment alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=3 \
--port=8080
```
Label it and the pods
```
kubectl label deployment alpaca-prod env=prod ver=1
```
```
kubectl label pod --selector=app=alpaca-prod env=prod ver=1
```
Expose the service while also defining the selector
```
kubectl expose deployment alpaca-prod --type=NodePort --selector="app=alpaca-prod,ver=1,env=prod"
```
Then check on your service
```
kubectl get services -o wide
```
Consider adding a readiness check to the deployment. The service uses it to forward traffic only
to ready pods. You can watch the endpoints used by the service (and watch containers being removed from it)
with:
```
kubectl get endpoints alpaca-prod --watch
```
### Accessing the exposed service
A cheap way in dev is just to use port forwarding
```
ALPACA_PROD=$(kubectl get pods -l app=alpaca -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $ALPACA_PROD 48858:8080
```
Another, potentially production-capable alternative is to use a NodePort type Service. This opens a port on all worker nodes
that forwards traffic to the service.
Option 1: Expose as NodePort
```
kubectl expose deployment alpaca-prod --type=NodePort
```
Option 2: Modify Service switching to NodePort
```
kubectl edit service alpaca-prod
```
Change the `spec.type` field to `NodePort` and save.
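After the edit, the relevant part of the Service spec looks roughly like this (a sketch; the `nodePort` value is illustrative, and Kubernetes picks one for you if it's omitted):
```yaml
spec:
  type: NodePort
  selector:
    app: alpaca-prod
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080  # illustrative; omit to let Kubernetes choose
```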
Check the port it is being served on:
```
kubectl describe service alpaca-prod
```
## LoadBalancer Services
If the cloud environment supports it, you should be able to edit `spec.type` to use `LoadBalancer`.
This builds on top of `NodePort`: your cloud provider creates a new load balancer and directs it at
nodes in your cluster. The service should eventually be assigned an EXTERNAL-IP with a public IP (or hostname)
assigned by the cloud vendor.
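A sketch of the same change for a load balancer (the external port 80 mapping is an assumption, not from the book):
```yaml
spec:
  type: LoadBalancer
  selector:
    app: alpaca-prod
  ports:
    - port: 80        # port exposed on the cloud load balancer (assumed)
      targetPort: 8080
```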

@ -0,0 +1,26 @@
# Hexapod
## Resources or Inspiration
The Phoenix code stuff is kind of old, but look up this guy; his repos also support a quad mode:
https://github.com/KurtE/Arduino_Phoenix_Parts
See also https://www.robotshop.com/community/robots/show/interbotix-phantomx-hexapod
[Hiwonder SpiderPi](https://www.hiwonder.hk/products/robosoul-spiderpi-ai-intelligent-visual-hexapod-robot-powered-by-raspberry-pi). Uses Python, but I can't find the code anywhere.
### Markwtech
A very nice project. Available on Thingiverse and at:
https://markwtech.com/robots/hexapod/
https://www.thingiverse.com/thing:3463845?collect
## Gaits
Tripod, ripple, and wave walking gaits.
Adaptive gait to walk on uneven terrain. The Interbotix guys did this with the Robotis API.
Translate and rotate in place.

@ -0,0 +1,55 @@
apiVersion: v1
kind: Pod
metadata:
  name: kuard
  labels:
    app: kuard
    version: dirp
spec:
  volumes:
    - name: "kuard-data"
      hostPath:
        path: "/var/lib/kuard"
    # - name: "kuard-data"
    #   nfs:
    #     server: "my.nfs.server.local"
    #     path: "/exports"
  containers:
    - image: gcr.io/kuar-demo/kuard-amd64:blue
      name: kuard
      volumeMounts:
        - mountPath: "/data"
          name: "kuard-data"
      livenessProbe:
        httpGet:
          path: /healthy
          port: 8080
        initialDelaySeconds: 5
        timeoutSeconds: 1
        periodSeconds: 10
        failureThreshold: 3
      readinessProbe:
        httpGet:
          path: "/ready"
          port: 8080
        initialDelaySeconds: 30
        timeoutSeconds: 1
        periodSeconds: 10
        failureThreshold: 3
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      resources:
        requests:
          cpu: "500m"
          memory: "128Mi"
        limits:
          cpu: "1000m"
          memory: "256Mi"

@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
    - name: "kuard-data"
      hostPath:
        path: "/var/lib/kuard"
  containers:
    - image: gcr.io/kuar-demo/kuard-amd64:blue
      name: kuard
      volumeMounts:
        - mountPath: "/data"
          name: "kuard-data"
      livenessProbe:
        httpGet:
          path: /healthy
          port: 8080
        initialDelaySeconds: 5
        timeoutSeconds: 1
        periodSeconds: 10
        failureThreshold: 3
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP

@ -0,0 +1,19 @@
## Deployments
## Services
## Node Ports
## Endpoints
The "buddy" of a service; it contains the IP addresses backing that service.
```
kubectl describe endpoints <service name>
```
## Ingress
Multiple Ingress objects are merged together into a single config for the Kubernetes internal HTTP load-balancing system.
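A minimal sketch of an Ingress object (hostname, service name, and port are placeholders; this uses the `networking.k8s.io/v1` API, while clusters from the 1.17 era used `extensions/v1beta1`):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alpaca-ingress
spec:
  rules:
    - host: alpaca.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: alpaca-prod
                port:
                  number: 8080
```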