
Microservices on Kubernetes

Now that we're familiar with the overall architecture of the sample application, how will we initially deploy it into EKS? Let's explore some of the Kubernetes building blocks by looking at the catalog component:

Catalog microservice in Kubernetes

There are a number of things to consider in this diagram:

  • The application that provides the catalog API runs as a Pod, which is the smallest deployable unit in Kubernetes. Application Pods will run the container images we outlined in the previous section.

  • The Pods that run for the catalog component are created by a Deployment which may manage one or more "replicas" of the catalog Pod, allowing it to scale horizontally.

  • A Service is an abstract way to expose an application running as a set of Pods, and this allows our catalog API to be called by other components inside the Kubernetes cluster. Each Service is given its own DNS entry.

  • We're starting this workshop with a MySQL database that runs inside our Kubernetes cluster as a StatefulSet, which is designed to manage stateful workloads.

  • All of these Kubernetes constructs are grouped in their own dedicated catalog Namespace. Each of the application components has its own Namespace.

Each of the components in the microservices architecture is conceptually similar to the catalog, using Deployments to manage application workload Pods and Services to route traffic to those Pods. If we expand our view of the architecture, we can consider how traffic is routed throughout the broader system:

Microservices in Kubernetes

The ui component receives HTTP requests from, for example, a user's browser. It then makes HTTP requests to other API components in the architecture to fulfill that request and returns a response to the user. Each of the downstream components may have its own data store or other infrastructure. The Namespaces are a logical grouping of the resources for each microservice and also act as a soft isolation boundary, which can be used to implement controls with Kubernetes RBAC and Network Policies.
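As a concrete example, the catalog Namespace itself is declared in a manifest. A minimal sketch of what the namespace.yaml file in the catalog manifests might contain, assuming it only sets the name and the created-by label we filter on below, is:

apiVersion: v1
kind: Namespace
metadata:
  name: catalog
  labels:
    app.kubernetes.io/created-by: eks-workshop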

Before we do anything, let's inspect the current Namespaces in our EKS cluster:

~$kubectl get namespaces

NAME                            STATUS   AGE
default                         Active   1h
kube-node-lease                 Active   1h
kube-public                     Active   1h
kube-system                     Active   1h

All of the entries listed are Namespaces for system components that were pre-installed for us. We'll ignore these by using Kubernetes labels to filter the Namespaces down to only those we've created:

~$kubectl get namespaces -l app.kubernetes.io/created-by=eks-workshop

No resources found

The first thing we'll do is deploy the catalog component by itself. The manifests for this component can be found in ~/environment/eks-workshop/base-application/catalog.

~$ls ~/environment/eks-workshop/base-application/catalog

configMap.yaml
deployment.yaml
kustomization.yaml
namespace.yaml
secrets.yaml
service-mysql.yaml
service.yaml
serviceAccount.yaml
statefulset-mysql.yaml
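
The kustomization.yaml file is what lets us apply this whole directory with a single kubectl apply -k command later in this section. Its exact contents aren't shown here, but a minimal sketch would simply list the other manifests as resources:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - serviceAccount.yaml
  - configMap.yaml
  - secrets.yaml
  - service.yaml
  - service-mysql.yaml
  - deployment.yaml
  - statefulset-mysql.yaml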

These manifests include the Deployment for the catalog API:

~/environment/eks-workshop/base-application/catalog/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: catalog
      app.kubernetes.io/instance: catalog
      app.kubernetes.io/component: service
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: catalog
        app.kubernetes.io/instance: catalog
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
    spec:
      serviceAccountName: catalog
      securityContext:
        fsGroup: 1000
      containers:
        - name: catalog
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: catalog-db
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: catalog-db
                  key: password
          envFrom:
            - configMapRef:
                name: catalog
          securityContext:
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          image: "public.ecr.aws/aws-containers/retail-store-sample-catalog:0.4.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            successThreshold: 3
            periodSeconds: 5
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 512Mi
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      volumes:
        - name: tmp-volume
          emptyDir:
            medium: Memory


This Deployment expresses the desired state of the catalog API component:

  • Use the public.ecr.aws/aws-containers/retail-store-sample-catalog container image

  • Run a single replica

  • Expose the container on port 8080 named http

  • Run probes/healthchecks against the /health path

  • Request a specific amount of CPU and memory so the Kubernetes scheduler can place it on a node with enough available resources

  • Apply labels to the Pods so other resources can refer to them
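
Notice that the container's environment is populated from a ConfigMap named catalog and a Secret named catalog-db, which correspond to the configMap.yaml and secrets.yaml files listed earlier. Based on the keys referenced in the Deployment, they might look roughly like the following sketch; the key names for the ConfigMap and all of the values are purely illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: catalog
data:
  # Illustrative keys and values; the real ConfigMap points the
  # application at the in-cluster MySQL Service
  DB_ENDPOINT: catalog-mysql:3306
  DB_NAME: catalog
---
apiVersion: v1
kind: Secret
metadata:
  name: catalog-db
type: Opaque
stringData:
  # Placeholder credentials for illustration only
  username: catalog
  password: example-password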

The manifests also include the Service used by other components to access the catalog API:

~/environment/eks-workshop/base-application/catalog/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: catalog
  labels:
    app.kubernetes.io/created-by: eks-workshop
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: catalog
    app.kubernetes.io/instance: catalog
    app.kubernetes.io/component: service


This Service:

  • Selects catalog Pods using labels that match what we expressed in the Deployment above

  • Exposes itself on port 80

  • Targets the http port exposed by the Deployment, which translates to port 8080

Let's create the catalog component:

~$kubectl apply -k ~/environment/eks-workshop/base-application/catalog

namespace/catalog created
serviceaccount/catalog created
configmap/catalog created
secret/catalog-db created
service/catalog created
service/catalog-mysql created
deployment.apps/catalog created
statefulset.apps/catalog-mysql created

Now we'll see a new Namespace:

~$kubectl get namespaces -l app.kubernetes.io/created-by=eks-workshop

NAME      STATUS   AGE
catalog   Active   15s

We can take a look at the Pods running in this namespace:

~$kubectl get pod -n catalog

NAME                       READY   STATUS    RESTARTS      AGE
catalog-846479dcdd-fznf5   1/1     Running   2 (43s ago)   46s
catalog-mysql-0            1/1     Running   0             46s

Notice we have a Pod for our catalog API and another for the MySQL database. The catalog Pod has already restarted a couple of times, and you may briefly see it in a CrashLoopBackOff state. This is because it needs to be able to connect to the catalog-mysql Pod before it will start, and Kubernetes will keep restarting it until that is the case. Luckily, we can use kubectl wait to monitor specific Pods until they are in a Ready state:

~$kubectl wait --for=condition=Ready pods --all -n catalog --timeout=180s


Now that the Pods are running we can check their logs, for example the catalog API:

TIP

You can "follow" the kubectl logs output by adding the -f option to the command (use Ctrl-C to stop following the output).

~$kubectl logs -n catalog deployment/catalog


Kubernetes also allows us to easily scale the number of catalog Pods horizontally:

~$kubectl scale -n catalog --replicas 3 deployment/catalog

deployment.apps/catalog scaled

~$kubectl wait --for=condition=Ready pods --all -n catalog --timeout=180s
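
Once the wait completes, you can list the Pods again to confirm the additional replicas are running:

~$kubectl get pod -n catalog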


The manifests we applied also create a Service for each of the application and MySQL Pods, which other components in the cluster can use to connect:

~$kubectl get svc -n catalog

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
catalog         ClusterIP   172.20.83.84     <none>        80/TCP     2m48s
catalog-mysql   ClusterIP   172.20.181.252   <none>        3306/TCP   2m48s
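
The catalog-mysql Service follows the same pattern as the catalog Service we saw earlier. Its manifest, service-mysql.yaml, isn't reproduced here, but based on the port shown above a minimal sketch might look like this (the selector labels are assumed and must match the Pods created by the catalog-mysql StatefulSet):

apiVersion: v1
kind: Service
metadata:
  name: catalog-mysql
  labels:
    app.kubernetes.io/created-by: eks-workshop
spec:
  type: ClusterIP
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
      name: mysql
  selector:
    # Assumed labels; the real selector must match the Pods created by
    # the catalog-mysql StatefulSet
    app.kubernetes.io/name: catalog
    app.kubernetes.io/component: mysql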

These Services are internal to the cluster, so we cannot access them from the Internet or even from within the VPC. However, we can use kubectl exec to run a command inside an existing Pod in the EKS cluster to check that the catalog API is working:

~$kubectl -n catalog exec -it \
  deployment/catalog -- curl catalog.catalog.svc/catalogue | jq .


You should receive back a JSON payload with product information. Congratulations, you've just deployed your first microservice to Kubernetes with EKS!