KodeKloud CKS challenge #2
Dockerfile
- 1. Run as non-root (use the correct application user instead)
- 2. Avoid exposing unnecessary ports
- 3. Avoid copying the 'Dockerfile' and other unnecessary files and directories into the image. Move the required files and directories (app.py, requirements.txt and the templates directory) to a subdirectory called 'app' under 'webapp' and update the COPY instruction in the 'Dockerfile' accordingly.
- 4. Once the security issues are fixed, rebuild this image locally with the tag 'kodekloud/webapp-color:stable'
cat Dockerfile
=================================================================
FROM python:3.6-alpine
## Install Flask
RUN pip install flask
## Copy All files to /opt
COPY . /opt/
## Flask app to be exposed on port 8080
EXPOSE 8080
## Flask app to be run as 'worker'
RUN adduser -D worker
## Expose port 22
EXPOSE 22
WORKDIR /opt
USER root
ENTRYPOINT ["python", "app.py"]
=================================================================
cd webapp
mkdir app
mv app.py requirements.txt templates/ app/
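A '.dockerignore' file also keeps the Dockerfile itself (and other build clutter) out of the build context; a minimal sketch, assuming nothing else in 'webapp' is needed at runtime:
cat > .dockerignore <<'EOF'
Dockerfile
.dockerignore
*.md
EOF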
=================================================================
FROM python:3.6-alpine
ENV PATH=$PATH:/home/worker/.local/bin
COPY app/ /opt/
WORKDIR /opt
## Install Flask
RUN apk -U update \
    && python -m pip install --upgrade pip \
    && pip install -r requirements.txt \
    && adduser -D worker
## Flask app to be exposed on port 8080
EXPOSE 8080
USER worker
ENTRYPOINT ["python", "app.py"]
=================================================================
docker build -t kodekloud/webapp-color:stable .
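Quick sanity check that the rebuilt image runs as 'worker' and no longer exposes port 22 (assumes Docker is the local runtime; '--entrypoint id' overrides the Flask entrypoint just for the check):
docker run --rm --entrypoint id kodekloud/webapp-color:stable
docker image inspect kodekloud/webapp-color:stable --format '{{.Config.User}} {{.Config.ExposedPorts}}'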
==================================================================
==================================================================
- Ensure that the pod 'dev-webapp' is immutable:
- 1. This pod can be accessed using the 'kubectl exec' command. We want to make sure that this does not happen. Use a startupProbe to remove all shells before the container starts up. Use an 'initialDelaySeconds' and 'periodSeconds' of '5'. Hint: for this to work you would have to run the container as root!
- 2. Image used: 'kodekloud/webapp-color:stable'
- 3. Redeploy the pod as per the above recommendations and make sure that the application is up.
kubectl delete pod -n dev dev-webapp --force
kubectl delete pod -n staging staging-webapp --force
cd
vim dev-webapp.yaml
https://gist.github.com/tuxerrante/8d1568306e55a2fd67ad0707a266348c
https://gist.github.com/tuxerrante/43fe3b0c93453901630bc23d2ea13b1c
==================================================================
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dev-webapp
  name: dev-webapp
  namespace: dev
spec:
  containers:
  - env:
    - name: APP_COLOR
      value: darkblue
    image: kodekloud/webapp-color:stable
    imagePullPolicy: Never
    name: webapp-color
    resources: {}
    startupProbe:
      exec:
        command:
        - rm
        - /bin/ash
        - /bin/sh
      initialDelaySeconds: 5
      periodSeconds: 5
    securityContext:
      runAsUser: 0
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-z4lvb
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-z4lvb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
==================================================================
==================================================================
kubectl apply -f dev-webapp.yaml
kubectl get events -n dev --sort-by='.lastTimestamp'
k get pods -n dev
k logs -n dev dev-webapp
## Sometimes the image doesn't get pulled by the dev pod. Deleting and re-creating the pod works.
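Since the pod sets 'imagePullPolicy: Never', the image must already exist in the node's local runtime; a quick check (crictl if the node runs containerd, docker otherwise):
crictl images | grep webapp-color
docker images | grep webapp-color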
kubesec scan dev-webapp.yaml
[
  {
    "object": "Pod/dev-webapp.dev",
    "valid": true,
    "fileName": "dev-webapp.yaml",
    "message": "Failed with a score of -34 points",
    "score": -34,
    "scoring": {
      "critical": [
        {
          "id": "CapSysAdmin",
          "selector": "containers[] .securityContext .capabilities .add == SYS_ADMIN",
          "reason": "CAP_SYS_ADMIN is the most privileged capability and should always be avoided",
          "points": -30
        },
        {
          "id": "AllowPrivilegeEscalation",
          "selector": "containers[] .securityContext .allowPrivilegeEscalation == true",
          "reason": "",
          "points": -7
        }
        …
==================================================================
==================================================================
cat staging-webapp.yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: staging-webapp
  name: staging-webapp
  namespace: staging
spec:
  containers:
  - env:
    - name: APP_COLOR
      value: pink
    image: kodekloud/webapp-color:stable
    imagePullPolicy: Never
    name: webapp-color
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      runAsUser: 0
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-v78f2
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-v78f2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
==================================================================
==================================================================
kubectl delete -n dev pod dev-webapp
kubectl apply -f dev-webapp.yaml
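Once the pod is Running again, verify both that the app is up and that 'kubectl exec' can no longer spawn a shell (the exact error message varies by runtime):
kubectl get pods -n dev dev-webapp
## should fail: the startupProbe removed /bin/ash and /bin/sh
kubectl exec -it -n dev dev-webapp -- sh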
==================================================================
==================================================================
Use a network policy called 'prod-netpol' that will only allow traffic within the 'prod' namespace. All the traffic from other namespaces should be denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prod-netpol
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
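A rough way to test the policy ('<prod-web-pod-ip>' and port 8080 are placeholders for illustration): the request should succeed from inside 'prod' and time out from any other namespace:
kubectl get pods -n prod -o wide
## from inside prod: expect a response
kubectl run test-prod --rm -it --image=busybox -n prod -- wget -qO- --timeout=2 http://<prod-web-pod-ip>:8080
## from another namespace: expect a timeout
kubectl run test-other --rm -it --image=busybox -n default -- wget -qO- --timeout=2 http://<prod-web-pod-ip>:8080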
==================================================================
==================================================================
kubectl create secret -n prod generic prod-db \
--from-literal=db-user=root \
--from-literal=db-psw=paswrd \
--from-literal=DB_Host=prod-db
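To double-check what was stored (values are base64-encoded at rest):
kubectl describe secret -n prod prod-db
kubectl get secret -n prod prod-db -o jsonpath='{.data.db-user}' | base64 -d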
https://gist.github.com/tuxerrante/8dcc7c4867e8792a55a996c4c269ea17
spec:
  containers:
  - env:
    - name: DB_Host
      valueFrom:
        secretKeyRef:
          key: DB_Host
          name: prod-db
    - name: DB_User
      valueFrom:
        secretKeyRef:
          key: db-user
          name: prod-db
    - name: DB_Password
      valueFrom:
        secretKeyRef:
          key: db-psw
          name: prod-db
==========================================================
➜ kubectl edit deployments.apps -n prod prod-web
deployment.apps/prod-web edited
➜ kubectl rollout -n prod restart deployment prod-web
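Finally, confirm the rollout finished and the variables landed in the new pods (the grep pattern is just illustrative):
kubectl rollout status -n prod deployment prod-web
kubectl exec -n prod deploy/prod-web -- env | grep DB_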