Project author: justindav1s

Project description:
Kubernetes on Vagrant with VirtualBox and Centos7
Primary language: Shell
Project address: git://github.com/justindav1s/k8s_vagrant.git
Created: 2020-02-09T17:13:17Z
Project community: https://github.com/justindav1s/k8s_vagrant

License: Apache License 2.0

Kubernetes on Vagrant with VirtualBox and CentOS 7

This repo deploys Kubernetes into a local cloud composed of Vagrant, VirtualBox and CentOS 7.

As currently configured it spins up 7 VMs: a load-balancer, 3 controllers and 3 workers. This uses 36 GB of RAM, but the cluster could be configured to use much less. See the settings in cluster/vagrantFile.
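
A quick way to find the memory settings to trim before bringing the cluster up (this assumes the VirtualBox provider's usual memory keyword is used in the file):

# Locate the per-VM memory allocations before adjusting them
grep -n -i "memory" cluster/vagrantFile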

  • bin : scripts that start the cloud and deploy Kubernetes (a usage sketch follows this list)
    • cluster.sh [up|halt|destroy …..] : spin up, halt or destroy the cloud
    • deploy.sh : deploy Kubernetes
    • snapshot_create.sh : create a VirtualBox snapshot
    • dashboard port-forwarding
  • cluster : contains Vagrant and script resources to spin up the VM cluster, with static IP addresses
    • 192.168.20.10 : haproxy load-balancer and dnsmasq DNS server
    • 192.168.20.[11-13] : Kubernetes masters hosting the API
    • 192.168.20.[21-23] : Kubernetes workers that host pods with CRI-O
  • cluster/init_scripts : scripts that initialise each VM appropriately at start-up
  • kubernetes/masters : scripts to set up and check the status of the Kubernetes control plane
    • Flannel deploy
  • kubernetes/workers : scripts to set up and check the status of the Kubernetes workers
    • dashboard deploy
  • lb : script to check the status of the load-balancer for the Kubernetes API
  • ingress : contains a script to configure nginx ingress
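
Putting the scripts above together, a typical run might look like this (a sketch; the working directory, script arguments beyond those listed, and the API port 6443 are assumptions, not taken from the repo):

# Spin up the load-balancer, master and worker VMs
./bin/cluster.sh up

# Deploy Kubernetes onto the running VMs
./bin/deploy.sh

# Optionally take a VirtualBox snapshot of the freshly built cluster
./bin/snapshot_create.sh

# The API should now be reachable through the haproxy load-balancer
# (6443 is kubeadm's default API server port - an assumption here)
curl -k https://192.168.20.10:6443/version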

Flannel on Vagrant

On Vagrant, Flannel needs to be told to use the host-only adapter (eth1) rather than the default NAT interface:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#default-nic-when-using-flannel-as-the-pod-network-in-vagrant

kubectl patch daemonset kube-flannel-ds-amd64 -n kube-system \
  --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/3", "value": "--iface=eth1"}]'
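
If the patch has been applied, --iface=eth1 should show up in the flannel container's arguments; a quick check (assuming the same daemonset name as above):

# Inspect the flannel container args; the list should now end with --iface=eth1
kubectl -n kube-system get daemonset kube-flannel-ds-amd64 \
  -o jsonpath='{.spec.template.spec.containers[0].args}'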

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
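
The page above debugs cluster DNS with a throwaway dnsutils pod; the short version of that check (manifest URL and pod name are taken from that page):

# Start the dnsutils test pod, then resolve a cluster-internal name through CoreDNS
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default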

Dashboard Port-forwarding

kubectl --kubeconfig=admin.config port-forward \
  $(kubectl --kubeconfig=admin.config get pods -l k8s-app=kubernetes-dashboard -o jsonpath="{.items[0].metadata.name}" -n kubernetes-dashboard) \
  --address 127.0.0.1 8443:8443 \
  -n kubernetes-dashboard
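
While the port-forward is running, the dashboard should answer on the forwarded port; a quick reachability check (the dashboard serves a self-signed certificate, hence -k):

# The dashboard listens on HTTPS with a self-signed cert, so skip verification
curl -k https://127.0.0.1:8443/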

HTTP ingress

kubectl -n kubernetes-dashboard apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: frontend.192.168.20.10.xip.io
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /
EOF
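
The manifest above only creates the Ingress; assuming a Service named frontend exists in the kubernetes-dashboard namespace, the route can be exercised with curl (xip.io wildcard DNS resolves the hostname to 192.168.20.10):

# xip.io resolves frontend.192.168.20.10.xip.io to 192.168.20.10
curl -v http://frontend.192.168.20.10.xip.io/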

HTTPS ingress

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=192.168.20.10.nip.io"
kubectl --namespace default create secret tls tls-secret --cert=tls.crt --key=tls.key
kubectl -n default apply -f - <<EOF
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: frontend
  namespace: default
spec:
  tls:
  - hosts:
    - frontend.192.168.20.10.nip.io
    secretName: tls-secret
  rules:
  - host: frontend.192.168.20.10.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
EOF
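
As with the HTTP route, this assumes a frontend Service exists in the default namespace; -k is needed because the certificate created above is self-signed:

# nip.io resolves frontend.192.168.20.10.nip.io to 192.168.20.10; -k accepts the self-signed cert
curl -vk https://frontend.192.168.20.10.nip.io/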