trying to get my head around ingresses while simultaneously devising some sort of back-channel admin access to a k8s cluster has been an interesting challenge.
firstly:
ingress resources are cool
i definitely need to spend more time reading up on them and playing around. vulcand? envoy? linkerd? i truly have no idea :/
however:
right now, though, we're talking back channels. alleys.
we have:
- 1 jumpbox with a floating ip
- 2 k8s api nodes with only LAN access
there are a couple of ingresses on the front end but, due to networking requirements, we're going with access through the internal floating ip network. so, alas, ingresses for another time.
oddly enough, i wasn't able to find any information about others doing this online. everything i could find suggested using an ingress or leveraging an lbaas from your cloud provider. neither is an option here, so what shall we do?
idea:
let's just see if this could work: installing nginx everywhere!
meaning, an nginx proxy on the jumpbox (which has a floating ip accessible only internally!) and on the API servers (x2).
for now, NO SECURITY. the nginx config on the api servers just proxies to the apiserver's local insecure port (8080):
server {
    listen 80;
    server_name kub.com;

    location / {
        proxy_pass http://localhost:8080;
    }
}
and on the jumpbox (with some jinja2 templating for ansible, obviously):
server {
    listen 80;
    server_name kub.com;

    location / {
        proxy_pass http://{{ hostvars[groups['kube-master'][0]]['ansible_eth0']['ipv4']['address'] }}:80;
    }
}
oh yeah, you also have to tell selinux to let nginx make the proxy connections:
$ semanage permissive -a httpd_t
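that's a blunter hammer than strictly needed, by the way. nginx runs in the httpd_t domain, and selinux has a boolean specifically for letting httpd_t processes open outbound connections, so if you'd rather keep the domain enforcing, this should do it (an alternative sketch, not what i ran here):

```shell
# allow httpd_t processes (nginx included) to make outbound network
# connections, without switching the whole domain to permissive
setsebool -P httpd_can_network_connect 1
```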
now for security. if we try to hit the secure api port from the jumpbox directly, we get this:
$ curl https://192.168.128.10:6443
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
easy enough. let's grab the ca cert from one of the masters and put it in the jumpbox's trust anchors (the plain `>` redirect would run as the local user, hence the sudo tee):
$ ssh -i ~/.ssh/id_rsa centos@master.kub.com sudo cat /etc/kubernetes/ssl/ca.crt | sudo tee /etc/pki/ca-trust/source/anchors/kub-ca.crt
$ sudo update-ca-trust
ok so now we can curl the api server’s endpoint from the jumpbox without any noise about certs:
$ curl https://192.168.128.10:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
great! we can get rid of the nginx proxy on the master now; no need to loop back round to localhost:8080.
so what we have now is:
| jumpbox:80 | -> | master:6443 |
we still need to set up some kind of authentication. let's go with basic auth, as it's the most …basic:
- name: make required dirs
  file:
    state: directory
    path: "{{ item }}"
  with_items:
    - /etc/kubernetes/tokens
    - /etc/kubernetes/users
    - /etc/kubernetes/manifests/roles

- name: write known users file
  template:
    src: known_users.csv.j2
    dest: /etc/kubernetes/users/known_users.csv

- name: write the roles manifest
  template:
    src: role.yml.j2
    dest: /etc/kubernetes/manifests/roles/{{ item.name }}.yml
  with_items:
    - "{{ roles }}"

- block:
    - name: apply the role-binding manifest
      command: |
        /usr/local/bin/kubectl apply -f /etc/kubernetes/manifests/roles/{{ item.name }}.yml
      with_items:
        - "{{ roles }}"
  run_once: true
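one assumption worth flagging: the known_users.csv.j2 template isn't shown anywhere above. it feeds kube-apiserver's --basic-auth-file flag (the apiserver has to be started with --basic-auth-file=/etc/kubernetes/users/known_users.csv for any of this to authenticate, and note the flag was deprecated and then removed in k8s 1.19), and the file format is one password,user,uid line per user. a sketch of what the template might look like, given the seed vars below:

```jinja
{% for role in roles %}
{% for user in role.users %}
{{ user.password }},{{ user.username }},{{ user.username }}
{% endfor %}
{% endfor %}
```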
and the k8s manifest template for the role (note: with rbac.authorization.k8s.io/v1, User subjects and the roleRef need apiGroup rbac.authorization.k8s.io, not ""):
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ item.namespace }}
  name: {{ item.name }}
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ item.namespace }}
  name: {{ item.name }}-binding
subjects:
{% for user in item.users %}
  - kind: User
    name: {{ user.username }}
    apiGroup: rbac.authorization.k8s.io
{% endfor %}
roleRef:
  kind: Role
  name: {{ item.name }}
  apiGroup: rbac.authorization.k8s.io
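once the playbook has applied these, a quick sanity check from a master (assuming kubectl is pointed at this cluster, and using the kubemaster user from the seed vars below):

```shell
# ask the apiserver whether the kubemaster user is allowed to
# list pods in the default namespace; answers "yes" if the binding took
kubectl auth can-i list pods --as kubemaster -n default
```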
aaand the seed vars (username spelled kubemaster, to match the auth string we encode next):
---
roles:
  - name: kubemaster
    namespace: default
    users:
      - username: kubemaster
        password: kubemasterpass
we can get the auth string by just base64-encoding kubemaster:kubemasterpass (printf rather than echo, so a stray trailing newline doesn't sneak into the encoding):
$ printf '%s' 'kubemaster:kubemasterpass' | base64
a3ViZW1hc3RlcjprdWJlbWFzdGVycGFzcw==
and then testing it with a curl from the jumpbox:
$ curl https://192.168.128.10:6443 -H "Authorization: Basic a3ViZW1hc3RlcjprdWJlbWFzdGVycGFzcw=="
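worth remembering what basic auth actually is: base64, not encryption. anyone who can see the header (and it travels over plain http on the jumpbox side) can recover the creds in one command:

```shell
# basic auth is reversible encoding, not a hash and not encryption
creds=$(printf '%s' 'kubemaster:kubemasterpass' | base64)
echo "$creds"                      # a3ViZW1hc3RlcjprdWJlbWFzdGVycGFzcw==
printf '%s' "$creds" | base64 -d   # kubemaster:kubemasterpass
```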
so what we have now is port 80 open on our jumpbox (unencrypted), proxying to port 6443 load-balanced across the masters, all templated up with jinja2 for use with ansible as usual:
upstream kub.com {
{% for item in groups['kube-master'] %}
    server {{ hostvars[item]['access_ip'] }}:6443;
{%- endfor %}
}

server {
    listen 80;
    server_name kub.com;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass https://kub.com;
        proxy_pass_request_headers on;
        proxy_set_header 'Authorization' "Basic {{ creds | b64encode }}";
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
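and that's the back channel: from any box that can reach the jumpbox's floating ip, the cluster is one plain http call away, no kubeconfig needed (assuming kub.com resolves to the jumpbox):

```shell
# the jumpbox injects the Authorization header for us,
# so a bare curl against port 80 reaches the apiserver
curl http://kub.com/version
```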