5.5 Environment, Configuration & Security
5.5.1 Namespaces
Namespaces allow you to organize resources in the cluster, which makes it easier to keep an overview when there are multiple resources for different needs. We might want to organize by team, by department, or according to a development environment (dev/prod), etc. By default, K8s places resources that do not specify otherwise in the default namespace, and kubectl interacts with the default namespace as well. Yet, there are already several namespaces in a basic K8s cluster:
- default - The default namespace for objects with no other namespace
- kube-system - The namespace for objects created by the Kubernetes system
- kube-public - This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
- kube-node-lease - This namespace holds the Lease objects associated with each node, which improve the performance of the node heartbeats as the cluster scales.
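These built-in namespaces can be inspected directly. kubectl lists all namespaces, and the -n (or --namespace) flag targets a specific one:
# list all namespaces in the cluster
kubectl get namespaces
# list the Pods of the kube-system namespace
kubectl get pods -n kube-system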
Of course, there is also the possibility of creating one's own namespace and using it by attaching, e.g., a Deployment to it, as seen in the following example.
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-deployment
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring-deployment
  template:
    metadata:
      labels:
        app: monitoring-deployment
    spec:
      containers:
        - name: monitoring-deployment
          image: "grafana/grafana:latest"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 5000
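Applying the manifest creates both the namespace and the Deployment inside it. Note that kubectl only lists the new resources when the namespace is stated explicitly:
kubectl apply -f namespace.yaml
# the Deployment does not show up in the default namespace
kubectl get deployments
# ...but it does when specifying the monitoring namespace
kubectl get deployments -n monitoring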
When creating a Service, a corresponding DNS entry is created as well, as seen in the Services section when calling backendflask directly. This entry is created according to the namespace the Service is assigned to. This can be useful when using the same configuration across multiple namespaces such as development, staging, and production. It is also possible to reach across namespaces. One needs to use the fully qualified domain name (FQDN) though, such as <service-name>.<namespace-name>.svc.cluster.local.
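As a sketch, assuming the backendflask Service from the Services section lives in a namespace called dev, a Pod in any other namespace could reach it via the FQDN:
# only works from within the dev namespace
wget -qO- http://backendflask
# works from any namespace in the cluster
wget -qO- http://backendflask.dev.svc.cluster.local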
5.5.2 Labels, Selectors and Annotations
In the previous sections we already made use of labels, selectors, and annotations, e.g. when matching the ClusterIP Service to the backend Deployments. Labels are key-value pairs that can be attached to objects such as Pods, Deployments, ReplicaSets, Services, etc. Overall, they are used to organize and select objects.
Annotations are an unstructured key-value mapping stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. In contrast to labels and selectors, annotations are not used for querying purposes but rather to attach arbitrary non-identifying metadata. This data assists tools and libraries working with the K8s resource, for example to pass configuration around between systems, or to provide values so external tools can make more informed decisions based on the annotations provided.
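For illustration, a minimal sketch of a Pod carrying annotations; the keys and values below are purely made up for this example:
# annotations.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: annotated-cat
  annotations:
    # e.g. a hint consumed by an external monitoring tool
    example.com/scrape: "true"
    # or a free-form, human-readable note
    description: "managed by the engineering team"
spec:
  containers:
    - name: annotated-cat
      image: busybox
      command: ["sleep", "3600"]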
Selectors are used to filter K8s objects based on a set of labels. A selector essentially uses a simple boolean language to select Pods. The selector matches labels on an all-or-nothing principle, meaning everything specified in the selector must be fulfilled by the labels. This does not hold the other way around, however: if a resource carries multiple labels and the selector specifies only one of them, the selector still matches the resource. How a selector matches the labels can be tested using the kubectl commands seen below.
# Show all pods including their labels
kubectl get pods --show-labels
# Show only pods that match the specified selector key-value pairs
kubectl get pods --selector="key=value"
kubectl get pods --selector="key=value,key2=value2"
# in short one can also write
kubectl get pods -l key=value
# or also look for multiple
kubectl get pods -l 'key in (value1, value2)'
When using ReplicaSets in a Deployment, their selector matches the labels of specific Pods (check e.g. the section describing Deployments). The ReplicaSet manages all Pods matching the labels of its selector and creates them according to the specified number of replicas. Of course, there can also be multiple labels specified.
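Besides the equality-based matchLabels used so far, Deployment and ReplicaSet selectors also accept set-based rules via matchExpressions, the YAML counterpart of the 'key in (...)' syntax shown above. A minimal sketch of such a selector fragment, with illustrative values:
# fragment of a Deployment/ReplicaSet spec
spec:
  selector:
    matchExpressions:
      # matches Pods whose app label is either cat-v1 or cat-v2
      - key: app
        operator: In
        values:
          - cat-v1
          - cat-v2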
The same principle applies when working with Services. The example below shows two different Pods and two NodePort Services. Each Service matches a Pod based on their selector-label relationship. Have a look at their specific settings using kubectl. The NodePort Service labels-and-selectors-2 has no endpoints, since selection follows the all-or-nothing principle and none of the created Pods carries the label environment=dev. In contrast, even though the Pod cat-v1 has multiple labels specified (app: cat-v1; version: one), the NodePort Service labels-and-selectors is linked to it. It is also linked to the second Pod cat-v2.
# labels.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cat-v1
  labels:
    app: cat-v1
    version: one
spec:
  containers:
    - name: cat-v1
      image: "seblum/mlops-public:cat-v1"
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
---
apiVersion: v1
kind: Pod
metadata:
  name: cat-v2
  labels:
    app: cat-v1
spec:
  containers:
    - name: cat-v2
      image: "seblum/mlops-public:cat-v2"
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: labels-and-selectors
spec:
  type: NodePort
  selector:
    app: cat-v1
  ports:
    - port: 80
      targetPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: labels-and-selectors-2
spec:
  type: NodePort
  selector:
    app: cat-v1
    environment: dev
  ports:
    - port: 80
      targetPort: 5000
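Whether a Service actually selected any Pods can be verified by checking its endpoints:
kubectl apply -f labels.yaml
# lists the IPs of both matching Pods
kubectl get endpoints labels-and-selectors
# shows no endpoints, as no Pod carries both
# app=cat-v1 and environment=dev
kubectl get endpoints labels-and-selectors-2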
5.5.3 ConfigMaps
When building software, the same container image should be used for the development, testing, staging, and production stages. Thus, container images should be reusable. What usually changes are only the configuration settings of the application. ConfigMaps allow storing such configurations as a simple mapping of key-value pairs. Most of the time, the configuration within a ConfigMap is injected using environment variables and volumes. However, ConfigMaps should only be used to store configuration files, not sensitive data, as they are not secured. Besides allowing for an easy change of variables, another benefit of using ConfigMaps is that configuration changes are not disruptive, meaning the application can keep running while the configuration changes. However, one needs to keep in mind that changes made to ConfigMaps and environment variables will not be reflected in already running containers.
The following example creates two different ConfigMaps. The first one includes three environment variables as data. The second one includes a more complex configuration of an nginx server.
# configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-properties
data:
  app-name: kitty
  app-version: 1.0.0
  team: engineering
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  # configuration in .conf
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;
      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }
      # redirect server error pages to the static page /50x.html
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
      location /health {
        access_log off;
        return 200 "healthy\n";
      }
    }
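The same ConfigMap data could also be created imperatively, analogous to the kubectl commands used for other resources:
# create a ConfigMap from literal key-value pairs
kubectl create configmap app-properties --from-literal=app-name=kitty --from-literal=app-version=1.0.0 --from-literal=team=engineering
# inspect the stored key-value pairs
kubectl describe configmap app-properties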
Additionally, a Deployment is created which uses both ConfigMaps. Each ConfigMap is declared under spec.volumes. It is also possible to reference both ConfigMaps simultaneously using a projected volume. The Deployment creates two containers. The first container mounts each ConfigMap as a volume. The second container uses environment variables to access the key-value pairs of the ConfigMaps and stores them on the container.
# configmaps_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-map
spec:
  selector:
    matchLabels:
      app: config-map
  template:
    metadata:
      labels:
        app: config-map
    spec:
      volumes:
        # specify ConfigMap nginx-conf
        - name: nginx-conf
          configMap:
            name: nginx-conf
        # specify ConfigMap app-properties
        - name: app-properties
          configMap:
            name: app-properties
        # if both ConfigMaps shall be mounted under one directory,
        # we need to use projected
        - name: config
          projected:
            sources:
              - configMap:
                  name: nginx-conf
              - configMap:
                  name: app-properties
      containers:
        - name: config-map-volume
          image: busybox
          volumeMounts:
            # mounts the nginx-conf volume defined above;
            # every key of the ConfigMap is mounted as a file
            # whose content is the corresponding value
            - mountPath: /etc/cfmp/nginx
              name: nginx-conf
            - mountPath: /etc/cfmp/properties
              name: app-properties
            - mountPath: /etc/cfmp/config
              name: config
          command:
            - "/bin/sh"
            - "-c"
          args:
            - "sleep 3600"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
        - name: config-map-env
          image: busybox
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          # as previously, keep the busybox container alive
          command:
            - "/bin/sh"
            - "-c"
          args:
            - "env && sleep 3600"
          env:
            # environment variables read in from the ConfigMap:
            # for every key-value pair in the ConfigMap's data, an own
            # environment variable is created, which gets
            # the value from the corresponding key
            - name: APP_VERSION
              valueFrom:
                configMapKeyRef:
                  name: app-properties
                  key: app-version
            - name: APP_NAME
              valueFrom:
                configMapKeyRef:
                  name: app-properties
                  key: app-name
            - name: TEAM
              valueFrom:
                configMapKeyRef:
                  name: app-properties
                  key: team
            # reads from the second ConfigMap
            - name: NGINX_CONF
              valueFrom:
                configMapKeyRef:
                  name: nginx-conf
                  key: nginx.conf
We can check for the attached configs by accessing the containers via the shell, similar to what we did in the section about Volumes. In the container config-map-volume, the configs are saved under the respective mountPath of the volume. In the container config-map-env, the configs are stored as environment variables.
# get into the -volume or -env container of the Pod
kubectl exec -it <pod-name> -c <container-name> -- sh
# check subdirectories
ls
# print environment variables
printenv
5.5.4 Secrets
Secrets, as the name suggests, store and manage sensitive information. However, Secrets are actually not secret in K8s. They can quite easily be decoded by retrieving a Secret with kubectl get secret <name> -o yaml and decoding the value with the shell command echo <encoded-value> | base64 -d. Thus, sensitive information like database passwords should never be stored in plain Secrets. There are much better resources to store such data, for example a vault of the cloud provider itself. Still, Secrets can be used so that you don't need to include confidential data in your application code. Since they are stored and created independently of the Pods that use them, there is less risk of exposure during the workflow of creating, viewing, and editing Pods.
It is possible to create Secrets using the imperative approach as shown below.
# create the two secrets db-password and api-token
kubectl create secret generic mysecret-from-cli --from-literal=db-password=123 --from-literal=api-token=token
# output the new secret as yaml
kubectl get secret mysecret-from-cli -o yaml
# create a file called secret with a password in it
echo "super-safe-password" > secret
# create a secret from the file
kubectl create secret generic mysecret-from-file --from-file=secret
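That the stored values are merely base64-encoded can be shown directly on the Secret created above, e.g. for the db-password key:
# print the encoded value of db-password
kubectl get secret mysecret-from-cli -o jsonpath='{.data.db-password}'
# decode it back to plain text (prints 123)
kubectl get secret mysecret-from-cli -o jsonpath='{.data.db-password}' | base64 -d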
Similar to ConfigMaps, secrets are accessed via an environment variable or a volume.
# secrets.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secrets
spec:
  selector:
    matchLabels:
      app: secrets
  template:
    metadata:
      labels:
        app: secrets
    spec:
      volumes:
        # get the secret from a volume
        - name: secret-vol
          secret:
            # the name of the secret we created earlier
            secretName: mysecret-from-cli
      containers:
        - name: secrets
          image: busybox
          volumeMounts:
            - mountPath: /etc/secrets
              name: secret-vol
          env:
            # name of the secret in the container
            - name: CUSTOM_SECRET
              # get the secret from an environment variable
              valueFrom:
                secretKeyRef:
                  # name and key of the secret we created earlier
                  name: mysecret-from-file
                  key: secret
          command:
            - "sleep"
            - "3600"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
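Both access paths can be verified inside the running container, similar to the ConfigMap example above:
# open a shell in a Pod of the secrets Deployment
kubectl exec -it deploy/secrets -- sh
# the volume-mounted secret appears as one file per key
ls /etc/secrets
cat /etc/secrets/db-password
# the environment variable holds the file-based secret
printenv CUSTOM_SECRET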
5.5.4.1 Exemplary use case of secrets
When pulling from a private Docker Hub repository, applying the Deployment will throw an error since no username and password are specified. As they should not be hard-coded into the Deployment YAML itself, they can be provided via a Secret. In fact, there is a dedicated Secret type for Docker registries. The Secret can be created using the imperative approach.
kubectl create secret docker-registry docker-hub-private \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD \
  --docker-email=YOUR_EMAIL
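The created Secret is of the dedicated type kubernetes.io/dockerconfigjson, which can be verified as follows:
# the TYPE column shows kubernetes.io/dockerconfigjson
kubectl get secret docker-hub-private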
Finally, the Secret is specified in the Deployment configuration, where it is used when pulling the image.
# secret_dockerhub.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-app
spec:
  selector:
    matchLabels:
      app: secret-app
  template:
    metadata:
      labels:
        app: secret-app
    spec:
      # specify the docker-registry secret to be accessed
      imagePullSecrets:
        - name: docker-hub-private
      containers:
        - name: secret-app
          # of course, you need your own private repository
          # to pull from and change the name accordingly
          image: seblum/private:cat-v1
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
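If the imagePullSecrets reference is missing or incorrect, the Pod fails to start with an ErrImagePull or ImagePullBackOff status, which can be diagnosed as follows:
# the STATUS column reveals the failed pull
kubectl get pods
# the events at the bottom show the authentication error
kubectl describe pod <pod-name>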