What is Serverless in Kyma?

Serverless in Kyma is an area that:

  • Ensures quick deployments following a Function approach
  • Enables scaling independent of the core applications
  • Makes it possible to revert changes without causing production system downtime
  • Supports the complete asynchronous programming model
  • Offers loose coupling of Event providers and consumers
  • Enables flexible application scalability and availability

Serverless in Kyma allows you to reduce the implementation and operation effort of an application to the absolute minimum. It provides a platform to run lightweight Functions in a cost-efficient and scalable way using JavaScript and Node.js. Serverless in Kyma relies on Kubernetes resources like Deployments, Services, and HorizontalPodAutoscalers for deploying and managing Functions, and on Kubernetes Jobs for creating Docker images.

"Serverless" refers to an architecture in which the infrastructure of your applications is managed by cloud providers. Contrary to its name, a serverless application does require a server but it doesn't require you to run and manage it on your own. Instead, you subscribe to a given cloud provider, such as AWS, Azure, or GCP, and pay a subscription fee only for the resources you actually use. Because the resource allocation can be dynamic and depends on your current needs, the serverless model is particularly cost-effective when you want to implement a certain logic that is triggered on demand. Simply, you get your things done and don't pay for the infrastructure that stays idle.

Kyma offers a service (known as "Function as a Service" or "FaaS") that provides a platform on which you can build, run, and manage serverless applications in Kubernetes. These applications are called Functions, and they are based on the Function custom resource (CR). They contain simple code snippets that implement a specific business logic. For example, you can define a Function that acts as a proxy saving all incoming event details to an external database.
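For illustration, a minimal Function CR might look as follows. This is a hedged sketch: the apiVersion, runtime name, and the Function name event-logger are assumptions that depend on your Kyma version.

    apiVersion: serverless.kyma-project.io/v1alpha1
    kind: Function
    metadata:
      name: event-logger          # hypothetical name
      namespace: default
    spec:
      runtime: nodejs14
      source: |
        module.exports = {
          main: function (event, context) {
            // Act as a proxy: log the incoming event payload; a real Function
            // could forward it to an external database instead.
            console.log(JSON.stringify(event.data));
            return "OK";
          }
        }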

Such a Function can be:

  • Triggered by other workloads in the cluster (in-cluster events) or business events coming from external sources. You can subscribe to them using a Subscription CR, as sketched after this list.
  • Exposed to an external endpoint (HTTPS). With an APIRule CR, you can define who can reach the endpoint and what operations they can perform on it.
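
A hedged sketch of such a Subscription, assuming the eventing.kyma-project.io/v1alpha2 schema; the event source and type are placeholders, and the sink points at the hypothetical Function's in-cluster Service:

    apiVersion: eventing.kyma-project.io/v1alpha2
    kind: Subscription
    metadata:
      name: order-created-sub     # hypothetical name
      namespace: default
    spec:
      # Deliver matching events to the Function's Service
      sink: http://event-logger.default.svc.cluster.local
      source: commerce            # placeholder event source
      types:
        - order.created.v1        # placeholder event type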

From code to Function

Pick the programming language for the Function and decide where you want to keep the source code. Serverless creates the workload from it for you.

Runtimes

Functions support multiple languages by using the underlying execution environments known as runtimes. Currently, you can create both Node.js and Python Functions in Kyma.

TIP: See sample Functions for each available runtime.

Source code

You can also choose where you want to keep your Function's source code and dependencies. You can either place them directly in the Function CR under the spec.source and spec.deps fields as an inline Function, or store the code and dependencies in a public or private Git repository (Git Functions). Choosing the second option ensures your Function is versioned and gives you more development freedom in the choice of a project structure or an IDE.
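
A hedged sketch of the two options, assuming the v1alpha1-style fields named above; the repository name, Function names, and directory layout are placeholders:

    # Inline Function: code and dependencies live in the CR itself
    apiVersion: serverless.kyma-project.io/v1alpha1
    kind: Function
    metadata:
      name: inline-fn             # hypothetical name
    spec:
      runtime: nodejs14
      source: |
        module.exports = { main: () => "Hello from inline source" }
      deps: |
        { "dependencies": {} }
    ---
    # Git Function: code is pulled from a Git repository instead
    apiVersion: serverless.kyma-project.io/v1alpha1
    kind: Function
    metadata:
      name: git-fn                # hypothetical name
    spec:
      runtime: nodejs14
      type: git
      source: my-repo             # name of a GitRepository CR pointing at the repository
      reference: main             # branch, tag, or commit to check out
      baseDir: /                  # directory with the Function's code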

TIP: Read more about Git Functions.

Container registries

By default, Serverless uses PersistentVolume (PV) as the internal registry to store Docker images for Functions. The default storage size of a single volume is 20 GB. This internal registry is suitable for local development.

If you use Serverless for production purposes, it is recommended that you use an external registry, such as Docker Hub, Google Container Registry (GCR), or Azure Container Registry (ACR).

Serverless supports two ways of connecting to an external registry:

  • You can set up an external registry before installation and provide its configuration as part of the Kyma installation.
  • You can switch to an external registry at runtime, with Kyma already installed on your cluster.

TIP: For details, read about switching registries at runtime.
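
In both cases, the registry credentials end up in a Secret that Serverless reads. A hedged sketch of such a Secret; the name, Namespace, label, and keys below follow the convention used by some Kyma versions and are assumptions, and the values are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: serverless-registry-config          # assumed name
      namespace: kyma-system
      labels:
        serverless.kyma-project.io/remote-registry: config   # assumed marker label
    type: Opaque
    stringData:
      username: <username>
      password: <password>
      serverAddress: <registry-server-address>  # address used for authentication
      registryAddress: <registry-address>       # address used to tag and push Function images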

Development toolkit

To start developing your first Functions, you need:

  • Self-hosted Kubernetes cluster and the KUBECONFIG file to authenticate to the cluster
  • Kyma as the platform for managing the Function-related workloads
  • Docker as the container runtime
  • kubectl, the Kubernetes command-line tool, for running commands against clusters
  • Development environment of your choice:
    • Kyma CLI to easily initiate inline Functions or Git Functions locally, run, test, and later apply them on the clusters
    • Node.js (v14 or v16) or Python (v3.9)
    • IDE as the source code editor
    • Kyma Dashboard to manage Functions and related workloads through the graphical user interface

Security considerations

To eliminate potential security risks when using Functions, bear in mind these few facts:

  • Kyma provides base images for serverless runtimes. Those default runtimes are maintained with regard to commonly known security advisories. It is possible to use a custom runtime image (see this tutorial). In such a case, you are responsible for security compliance and for assessing the exploitability of any potential vulnerabilities in the custom runtime image.

  • Kyma does not run any security scans against Functions and their images. Before you store any sensitive data in Functions, consider the potential risk of data leakage.

  • Kyma does not define any authorization policies that would restrict Functions' access to other resources within the Namespace. If you deploy a Function in a given Namespace, it can freely access all events and APIs of services within this Namespace.

  • Since Kubernetes is moving from PodSecurityPolicies to the PodSecurity Admission Controller, Kyma Functions must run in Namespaces with the baseline Pod security level. The restricted level is currently not supported due to the requirements of the Function building process; see the Namespace sketch after this list.

  • Kyma Serverless components can run with the PodSecurity Admission Controller support in the restricted Pod security level when using an external registry. When the Internal Docker Registry is enabled, the Internal Registry DaemonSet requires elevated privileges to function correctly, exceeding the limitations of both the restricted and baseline levels.

  • All administrators and regular users who have access to a specific Namespace in a cluster can also access:

    • Source code of all Functions within this Namespace
    • Internal Docker registry that contains Function images
    • Secrets allowing the build Job to pull and push images from and to the Docker registry (in non-system Namespaces)
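
For illustration, the baseline Pod security level is set with the standard Kubernetes Pod Security Admission label on the Namespace; the Namespace name is a placeholder:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: functions                                  # placeholder name
      labels:
        pod-security.kubernetes.io/enforce: baseline   # Functions can run here
        # "restricted" would block the Function build process, as noted above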

Limitations

Controller limitations

The Serverless controller does not serve time-critical requests from users. It reconciles Function custom resources (CRs) stored in the Kubernetes API server and has no persistent state of its own.

The Serverless controller doesn't build or serve Functions using its own allocated runtime resources. It delegates this work to dedicated Kubernetes workloads: it schedules build-time Jobs to build the Function's Docker image and runtime Pods to serve it once it is built. Refer to the architecture diagram for more details.

With this in mind, the Serverless controller does not require horizontal scaling. It scales vertically up to 160Mi of memory and 500m of CPU time.

Limitation for the number of Functions

There is no upper limit on the number of Functions that can run on Kyma (similar to Kubernetes workloads in general). Once a user defines a Function, the Serverless controller always requests its build Jobs and runtime Pods. It's up to Kubernetes to schedule them based on the available memory and CPU time on the Kubernetes worker nodes. This is determined mainly by the number of Kubernetes worker nodes (and the node auto-scaling capabilities) and their computational capacity.

Build phase limitation

The time necessary to build a Function depends on:

  • the selected build profile, which determines the requested resources (and their limits) for the build phase
  • the number and size of dependencies that must be downloaded and bundled into the Function image
  • the cluster nodes specification (see the note with the reference specification at the end of this article)

Build times were measured separately for the Node.js and Python runtimes.

The shortest build time (the lower limit) is approximately 15 seconds; achieving it requires no resource limits on the build Job and a minimal number of dependencies pulled in during the build phase.

Running multiple Function build jobs at once (especially with no limits) may drain the cluster resources. To mitigate such risk, there is an additional limit of 5 simultaneous Function builds. If a sixth one is scheduled, it is built once there is a vacancy in the build queue.

This limitation is configurable using containers.manager.envs.functionBuildMaxSimultaneousJobs.
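
How you apply this setting depends on how Serverless was installed. A hedged sketch using the classic Kyma installer overrides convention; the ConfigMap name, Namespace, and labels are assumptions:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: serverless-overrides   # assumed name
      namespace: kyma-installer    # assumed installer Namespace
      labels:
        installer: overrides       # marks the ConfigMap as installation overrides
        component: serverless
    data:
      containers.manager.envs.functionBuildMaxSimultaneousJobs: "3"   # lower the default of 5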

Runtime phase limitations

At runtime, Functions serve the user-provided logic wrapped in a web framework (Express for Node.js and Bottle for Python). Setting the user logic aside, those frameworks themselves have limitations that depend on the selected runtime profile and the Kubernetes nodes specification (see the note with the reference specification at the end of this article).

The response times of the selected runtime profiles were measured for a "hello world" Function requested at 50 requests/second, separately for the Node.js and Python runtimes. This reflects the overhead of the serving framework itself; any user logic added on top of that adds extra milliseconds and must be profiled separately.

Naturally, the bigger the runtime profile, the more resources are available to serve the response quicker. Consider these limits of the serving layer as a baseline, as they do not take your Function logic into account.

Scaling

Function runtime Pods can be scaled horizontally from zero up to the limits of the available resources on the Kubernetes worker nodes. See the Use external scalers tutorial for more information.
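
For example, an external scaler such as KEDA can target a Function directly. A hedged sketch assuming the KEDA ScaledObject API and the hypothetical Function event-logger; the trigger and replica counts are illustrative:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: event-logger-scaler    # hypothetical name
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: serverless.kyma-project.io/v1alpha2   # Function apiVersion depends on your Kyma version
        kind: Function
        name: event-logger
      minReplicaCount: 1           # CPU-based triggers cannot scale to zero; event-based triggers can
      maxReplicaCount: 5
      triggers:
        - type: cpu
          metricType: Utilization
          metadata:
            value: "50"            # target average CPU utilization in percent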

In-cluster Docker registry limitations

Serverless comes with an in-cluster Docker registry for Function images. This registry is suitable only for development because of its limitations, namely:

  • Registry capacity is limited to 20 GB
  • There is no image lifecycle management. Once an image is stored in the registry, it stays there until it is manually removed.

NOTE: All measurements were done on Kubernetes with five AWS worker nodes of type m5.xlarge (four CPU 3.1 GHz x86_64 cores, 16 GiB memory).

Useful links

If you're interested in learning more about the Serverless area, follow these links to: