Scaling on Kubernetes With Open Source Korifi

Korifi is an open source platform that makes deploying and managing applications on Kubernetes easy. It provides developers with a Cloud Foundry-like experience while offering the flexibility and power of Kubernetes.

One of the key benefits of Korifi is its ability to scale applications quickly. There are two main ways to scale an application in Korifi: horizontal scaling and vertical scaling.

Horizontal Scaling

Horizontal scaling is the process of adding more instances of an application to handle increased traffic. To horizontally scale an application in Korifi, you can use the cf scale command along with the -i flag.

For example, to scale your application to 10 instances, you would run the following command:

cf scale my-app -i 10

Korifi will automatically create the necessary Kubernetes resources and deploy the new instances of your application.
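With Korifi, application workloads run as pods in the Kubernetes namespace that backs your Cloud Foundry space. If you have cluster access, you can verify this with kubectl; the namespace name below is a placeholder, since the actual name depends on your Korifi setup:

kubectl get pods -n <space-namespace>

You should see one pod per application instance.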

Vertical Scaling

Vertical scaling is the process of increasing the resources (such as memory and CPU) allocated to an existing instance of an application. To vertically scale an application in Korifi, you can use the cf scale command with the -m flag.

For example, to scale your application to 1GB of memory, you would run the following command:

cf scale my-app -m 1G

You can make a similar change with the -k flag to increase disk space. Note that when vertically scaling an application’s disk space or memory limit, the change is applied to all instances of the app.
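For example, to give each instance a 2GB disk quota (the values here are just for illustration), you would run:

cf scale my-app -k 2G

The flags can also be combined, so cf scale my-app -m 1G -k 2G adjusts memory and disk in one step.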

Example: Scale a Ruby Application With Korifi

Let’s put this into practice. For this example, I will artificially make a Ruby app memory-bound by sending it too much traffic, and then horizontally scale the app so it can better handle the load. This example is demonstrated on Ubuntu 22.04.2 LTS. If you need help with installing Korifi, check out this tutorial.

To make things simple, we will use a sample Ruby app that you can get by cloning the following repository:

git clone https://github.com/sylvainkalache/sample-web-apps

Now, we will modify our Ruby app to simulate a memory-heavy operation by creating a large array on every request. Edit the app.rb file so that it looks like this:

require 'sinatra'

# Allocate a large array on each request to simulate a memory-heavy operation
get '/' do
  Array.new(100000)
end

While Korifi’s default memory allocation per application instance is 1GB, we will lower it to 50MB so the app quickly runs out of memory. We can do this by creating a Korifi manifest that specifies how much memory an instance can use. Run the following commands:

cd ruby/

cat << EOF > manifest.yaml
---
applications:
- memory: 50M
EOF

Now, let’s push our Ruby application:

cf push my-ruby-app
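The cf CLI picks up a manifest from the directory you push from, which is how the 50MB memory limit defined above gets applied. If you keep the manifest elsewhere, you can point to it explicitly with the -f flag (the path below is just an example):

cf push my-ruby-app -f manifest.yaml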

Once the application is deployed, we can see that it has a single instance running with 50MB of allocated memory.

$ cf app my-ruby-app
Showing health and status for app my-ruby-app in org tutorial-org / space tutorial-space as cf-admin...

name:              my-ruby-app
requested state:   started
routes:            my-ruby-app.apps-127-0-0-1.nip.io
last uploaded:     Fri 15 Sep 17:29:41 UTC 2023
stack:             io.buildpacks.stacks.jammy
buildpacks:

type:           web
sidecars:
instances:      1/1
memory usage:   50M
     state     since                  cpu    memory         disk      logging      details
#0   running   2023-09-15T17:32:29Z   0.0%   45.3M of 50M   0 of 1G   0/s of 0/s

Then, let’s send a massive number of requests to the app using Apache Bench (ab); you can install it with the following command:

apt-get install apache2-utils

We will send 150 requests using ab at a concurrency level of 50:

$ ab -n 150 -c 50 https://my-ruby-app.apps-127-0-0-1.nip.io/
[...]
Connection Times (ms)
              min  mean[+/-sd] median    max
Connect:        4    52   48.4     31    152
Processing:    43  4818 4014.0   3631  13103
Waiting:       41  4816 4016.4   3631  13102
Total:        150  4870 3991.6   3675  13113

We can see that the mean time for the app to process a request was 4.8 seconds, which is relatively slow.

Let’s solve that by horizontally scaling our app: we will add four more instances, for a total of five, using the cf scale command.

$ cf scale my-ruby-app -i 5
Scaling app my-ruby-app in org tutorial-org / space tutorial-space as cf-admin...

Instances starting...

Showing current scale of app my-ruby-app in org tutorial-org / space tutorial-space as cf-admin...

name:              my-ruby-app
requested state:   started
routes:            my-ruby-app.apps-127-0-0-1.nip.io
last uploaded:     Fri 15 Sep 17:29:41 UTC 2023
stack:             io.buildpacks.stacks.jammy
buildpacks:

type:           web
sidecars:
instances:      1/5
memory usage:   50M
     state      since                  cpu     memory         disk      logging      details
#0   running    2023-09-15T17:38:53Z   10.2%   47.5M of 50M   0 of 1G   0/s of 0/s
#1   starting   2023-09-15T17:38:53Z   0.0%    0 of 0         0 of 0    0/s of 0/s
#2   starting   2023-09-15T17:38:53Z   0.0%    0 of 0         0 of 0    0/s of 0/s
#3   starting   2023-09-15T17:38:53Z   0.0%    0 of 0         0 of 0    0/s of 0/s
#4   starting   2023-09-15T17:38:53Z   0.0%    0 of 0         0 of 0    0/s of 0/s

Now that our app has five instances, it should be able to handle the traffic; let’s confirm by benchmarking again with ab.

$ ab -n 150 -c 50 https://my-ruby-app.apps-127-0-0-1.nip.io/
[...]
Connection Times (ms)
              min  mean[+/-sd] median    max
Connect:        7   145   66.7    150    324
Processing:    18   537  373.7    722   1707
Waiting:       18   527  379.2    722   1698
Total:         81   681  383.8    810   1813

We can see that the mean time to process a request dropped from roughly 4.8 seconds to roughly half a second.
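Once the traffic spike has passed, you can scale back down the same way, for example returning to a single instance:

cf scale my-ruby-app -i 1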

Conclusion

While this was a fairly simple example, it illustrates how Korifi makes it easy to scale an application on Kubernetes without dealing with the underlying complexity. I also created a video that walks through what I’ve described in this article, if you’d like to see it in action.

Sylvain Kalache

Sylvain Kalache is a tech entrepreneur. He previously founded a software engineering school whose graduates have been hired by companies such as Google, NASA, Tesla, and Apple. He started his career as an SRE, working for tech startups and large companies like LinkedIn.
