Sotiris Kourouklis

Posted on • Updated on • Originally published at sotergreco.com

Why Kubernetes Was a Mistake for My SaaS Business (🤯)

As an independent developer, I reached a point in my career where I had to try Kubernetes. Until now, whenever I built software that managed a separate instance for each customer, each needing its own subdomain, I did all the manual labor and scaled vertically.

Also, using the same database for everyone is not ideal at scale: it becomes a single point of failure, because if it goes down, everything goes down.

Imagine if linktr.ee went down: all of its subdomains and handles would go down with it. I fixed that for my application by Dockerizing all of my services and putting them on Kubernetes.

Before diving deeper, let's look at my architecture: a Core API and Frontend for handling payments and subscriptions, an Admin Panel, and a separate Frontend and API for each customer.

Each customer had their own services in their own Pod on Kubernetes.

Infrastructure

Before implementing anything, I looked for managed K8S deployment solutions. At first I looked at AWS and Google Cloud, but both providers were really expensive, although very reliable.

So I decided on Vultr and its VKE K8S Engine, which had noticeably cheaper pricing but came with some downsides.

The most important issue, which I solved with a quick workaround, was that the default storage options (Block and Object Storage) did not support ReadWriteMany volumes, which I needed for my Postgres database.

To put it simply: when I created multiple replicas, they could not all access the same database volume. Only one replica at a time could mount it.


I overcame this issue with Longhorn, a cloud-native distributed block storage system for Kubernetes that supports ReadWriteMany (RWX) out of the box, not just ReadWriteOnce (RWO).
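With Longhorn installed, a volume claim can simply request RWX access. A minimal sketch of such a claim (the claim name and size are my own illustration; `longhorn` is the storage class name the Longhorn chart installs by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: mountable by several replicas at once
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```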

Another option was to run multiple database replicas, but I didn't want that for now, because data integrity was not critical for my project. I also used the Object Storage API to back up my database once per day.

Database Backup

How I did that was quite simple. On each instance's Laravel API, I installed the ayles-software/laravel-mysql-s3-backup package, which provides a command to back up your database to S3-compatible storage systems.

Here is the configuration:

// .env
AWS_API_KEY=7ZS5446GRW9T56XPHI2
AWS_API_SECRET=NFaJs89kNc4EuOFElkfgCjFJK43561LXl3TAzzC
AWS_S3_BUCKET=test
AWS_S3_REGION=ams1
AWS_ENDPOINT=https://ams1.vultrobjects.com/
BACKUP_FOLDER=store-core
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    /**
     * Schedule the backup command to run daily at 01:00.
     */
    protected function schedule(Schedule $schedule): void
    {
        $schedule->command('db:backup')->daily()->at('01:00');
    }
}
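For the `daily()` schedule above to actually fire, Laravel needs one system cron entry that invokes the scheduler every minute; this is standard Laravel setup, not specific to this package (the project path is a placeholder):

```
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
```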

Kubernetes Configuration

K8S works with YAML configuration files. Because each store needs its own YAML configuration, I generate the files programmatically in one of my Laravel services. Let's see how that looks in detail.

There are two types of incoming requests: the first when a customer registers and pays with Stripe, and the second when, for some reason, you want to update a store's configuration.


After the Core API validates the payment, the YAML K8S Generator Service creates the YAML files and runs these commands:

kubectl apply -f namespace.yml
kubectl apply -f sql-persistent-claim.yml
kubectl apply -f storage-persistent-claim.yml
kubectl apply -f service.yml
kubectl apply -f deployment.yml
kubectl apply -f configmap.yml
kubectl apply -f configmap-admin.yml
kubectl apply -f configmap-web.yml
kubectl apply -f configmap-nginx.yml
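A minimal sketch of how one of these files can be rendered per store. The placeholder template, variable names, and `mysaas`/`acme` values below are my own illustration, not the actual generator from the Laravel service:

```shell
# Render a per-store namespace manifest from a placeholder template.
STORE_DOMAIN="acme"
APP_NAME="mysaas"

cat > namespace.tmpl.yml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: __APP__-__STORE__
EOF

# Substitute the placeholders and write the final manifest.
sed -e "s/__APP__/$APP_NAME/" -e "s/__STORE__/$STORE_DOMAIN/" \
    namespace.tmpl.yml > namespace.yml

cat namespace.yml
# kubectl apply -f namespace.yml   # run against the cluster once rendered
```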

We also connect to the Cloudflare API so that a subdomain is created pointing to the IP assigned by the load balancer. We get this IP from the K8S API like this:

// Ask the K8S API for the store's Service and read the load balancer IP
// from its status. The client certificate and key authenticate the call.
$response = Http::withOptions([
    'verify' => false, // skip CA verification for the cluster's self-signed cert
    'cert' => $clientCertPath,
    'ssl_key' => $clientKeyPath,
])->withHeaders([
    'Content-Type' => 'application/json',
])->get("$apiEndpoint/api/v1/namespaces/" . env('APP_NAME') . "-$store->domain/services/" . env('APP_NAME') . "-service");

return json_decode($response->body(), true)['status']['loadBalancer']['ingress'][0]['ip'];
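The Cloudflare side is then a single API call: create an A record for the store's subdomain pointing at that IP. A hedged sketch with curl against Cloudflare's v4 API (the zone ID, API token, hostname, and IP are placeholders, and this needs a real zone to run):

```shell
# Create an A record for the new store subdomain via the Cloudflare API.
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"acme.example.com","content":"203.0.113.10","proxied":true}'
```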

Finally, we send the customer an email that their store is ready. Provisioning usually takes 2-3 minutes, and the email goes out 10 minutes after they have registered.

So the final system looks something like this:


Why Not To Do It

This is a very complex system indeed for an indie hacker. So before I talk about the benefits, let me discuss the cons and why you probably shouldn't do it.

Using Pods to manage different subdomains, as Shopify does, only makes sense under specific circumstances. One of them is having a large customer base and needing high availability for all of your customers. If you don't need high availability, this is pointless. In my case I needed it, but I didn't have that many customers yet, so it may only pay off in the long run.

Another thing is that setting up this infrastructure is really complex and rarely worth the time. It took me around 10 days to set all of these systems up, and I am a senior engineer with years of experience behind me.

Last but not least is the cost. Running two clusters, one for production and one for staging, can be very costly. Hosting everything on a single large VPS, or on Vercel, might be the better deal for your micro-SaaS.

Before getting to the pros, I want to mention one last thing: maintenance. How I roll out updates to all the stores when I merge a new feature deserves an article of its own. In short, I built a mechanism that connects GitHub Actions with my Core API to regenerate the YAML files and redeploy them.
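The fan-out at the end of that pipeline can be sketched in a few lines, assuming one namespace per store with a common prefix (the `mysaas-` prefix is my own placeholder, and this needs a live cluster to run):

```shell
# Trigger a rolling restart in every store namespace so each
# deployment picks up the newly pushed image.
for ns in $(kubectl get namespaces -o name | grep "^namespace/mysaas-"); do
  kubectl rollout restart deployment -n "${ns#namespace/}"
done
```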

Pros

The main benefit is that I learned not to do it again. There is no reason for an indie hacker to set all this up; it only becomes worth it once you have scaled to millions of sessions per month. So to everybody out there: try it to learn how Kubernetes works and how large-scale systems fit together, but don't assume it will be worth it in the end.

I have to say that I learned a lot implementing these systems. It might not be worth it from a time standpoint, but from a learning standpoint, it is worth it. If you want to "waste" 50 hours and a few hundred dollars on server and CI/CD costs, you can do it.

Conclusion

In conclusion, while Kubernetes offers a robust solution for managing large-scale applications with high availability, it may not be the best fit for every SaaS business, especially for indie developers or smaller projects.

The complexity, cost, and maintenance overhead can outweigh the benefits unless you have a substantial customer base and specific high-availability needs.

However, the learning experience gained from implementing such a system is invaluable. Ultimately, it's crucial to evaluate your specific requirements and resources before diving into Kubernetes for your project.

Thanks for reading, and I hope you found this article helpful. If you have any questions, feel free to email me at kourouklis@pm.me, and I will respond.

You can also keep up with my latest updates by checking out my X here: x.com/sotergreco

Top comments (1)

Aram Panasenco

To quote the philosopher Alfred Whitehead, "Civilization advances by extending the number of important operations which we can perform without thinking about them."

I had to start using Kubernetes at my job a few months back. The people who designed Kubernetes and the people who use it definitely don't seem to believe in Whitehead's principle above.

I'd look into AWS Copilot CLI. I haven't used it but it seems to fit the bill of just being able to deploy important stuff without having to think about it too much.