Kubernetes and 3 Reasons Why CoreBapp Chose It

Everybody in the software industry agrees that Kubernetes is hard to manage. Embarking on this journey takes money, resources, failures and, above all, tons of stress. The alternatives are not equivalent, though: it all comes down to scalability, flexibility, automation, cost and resilience.

Kubernetes (K8s) is an open-source system for managing containers in a distributed environment. It covers all the important tasks, from the application lifecycle and service health to its world-renowned auto-scaling. The maker of this sublime piece of software engineering was, of course, Google. According to credible sources, Google has been running all of its services (Mail, Search, Maps, etc.) on containers since around 2006. Along the way, it built two different in-house container-management systems, Borg and Omega, before the rise of Kubernetes.

Kubernetes Helps with Our Application-Oriented Infrastructure

Besides higher levels of utilization, containerization transforms the data center into an application-oriented infrastructure: encapsulating the application environment makes life easier for our dear devs and DevOps.

Application deployment and introspection improved with the early shift towards managing applications rather than machines via management APIs. On top of that, the Docker container image format hardens this abstraction by eliminating implicit OS dependencies and requiring explicit action to share data between containers.

If we see containers as the 'unit of management', we have to think about resource limits, metadata propagation, logging and monitoring, and auto-healing capabilities. This approach has proven to have multiple benefits: it relieves devs and DevOps from hardware-level concerns, eases scaling onto new hardware and OS upgrades, and ties telemetry (CPU, memory, disk) to applications rather than machines.
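
To make this concrete, here is a minimal sketch of a container as the 'unit of management': resource limits, metadata labels, and a liveness probe for auto-healing. All names, images and thresholds are hypothetical placeholders, not our production setup.

```yaml
# Illustrative Pod spec: resource limits, metadata labels,
# and a liveness probe for auto-healing. Names and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                  # hypothetical app name
  labels:
    app: demo-app                 # metadata propagated to logging/monitoring
    team: platform
spec:
  containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.0   # placeholder image
    resources:
      requests:                   # what the scheduler reserves
        cpu: 250m
        memory: 256Mi
      limits:                     # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi
    livenessProbe:                # auto-healing: restart on failed checks
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
```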

Capacity Planning and Cost Forecasting

Because costs are pivotal to our survival at this early stage, one of our most important criteria in choosing a container-management system was real-time resource monitoring combined with the ability to assign a near-real-time cost to what we use.

Kubernetes does a good job in managing and reporting used/available clusters resources, however, we needed a solution that gave us the overview of our capacity maximums vs cost. After some research and questions among our DevOps friends, we decided to give it a try with Kubernetes Opex Analytics. The tool is licensed under Apache 2.0 and seems to have a qualitative, yet young, standing community.
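
Usage reporting is one half of the picture; the other is putting guardrails around per-namespace capacity so the maximums stay predictable. A minimal sketch using a standard ResourceQuota (namespace and figures are illustrative):

```yaml
# Per-namespace capacity guardrail; all names and numbers are placeholders.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota                # hypothetical name
  namespace: team-a               # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"             # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"               # total CPU limit across all pods
    limits.memory: 16Gi
    pods: "20"                    # cap on pod count
```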

[Figure: Kubernetes Opex Analytics – daily cumulative memory and CPU usage per namespace]

We are happy to support this project, even if our contribution is small at this stage. We'll dedicate a follow-up article to it.

Autoscaling Complexity

In an application-oriented infrastructure, scaling is key to the performance your users experience. According to our initial tests, Kubernetes leads this category as well, although not necessarily in terms of speed.

K8s' native horizontal scaling logic is, of course, simple in essence: a control loop (see it as a permanent, configurable heartbeat) continuously compares observed metrics against a target threshold.

The algorithm's complexity, however, is a different discussion.
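
For reference, the upstream Horizontal Pod Autoscaler documents its core formula, and a minimal manifest looks as follows. Names are placeholders, and older clusters expose this API as autoscaling/v2beta2 rather than autoscaling/v2:

```yaml
# Documented HPA scaling formula:
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app                # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out above 70% average CPU
```

With these settings the loop would, for example, scale 4 replicas observed at 140% average CPU to ceil(4 * 140 / 70) = 8 replicas.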

Our advice to people just starting out with Kubernetes concerns the autoscaling setup: although simple in concept, it proves complex in execution and practice (which is a good thing, since we are control freaks). Before digging into K8s autoscaling, it's best practice to read up on the following:

  • install the kubectl command line tool (kubectl CLI)
  • configure the K8s API aggregation layer
  • autoscaling on custom metrics, beyond the default CPU usage (see the sketch after this list)
  • the Prometheus monitoring tool
  • TLS encryption, e.g. via a reverse proxy
  • instrument everything, to the best of your knowledge and capabilities
  • build and test scale-down rules during CI/CD
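
To illustrate the custom-metrics point above, here is a sketch of an HPA driven by a request-rate metric. It assumes an adapter (e.g. the Prometheus Adapter) is installed and exposes a hypothetical http_requests_per_second metric through the custom metrics API:

```yaml
# Sketch only: http_requests_per_second is a hypothetical metric name
# that an adapter would need to expose via custom.metrics.k8s.io.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-custom-hpa       # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app                # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "100"       # target ~100 req/s per pod
```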

Continuous Deployment Strategies

Application delivery is yet another much-praised capability that convinced us to adopt Kubernetes for our infrastructure.

K8s' YAML manifests give you awesome advantages like convenience and flexibility, and maintenance becomes a simple task if you add them to source control. We intend to play a lot with maps and lists (and combinations of the two) in order to achieve complex things with them.
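
To make the maps-and-lists point concrete, here is the anatomy of a typical manifest fragment; every value is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:            # a map: key/value pairs
    app: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:    # a list: each "-" entry is itself a map
      - name: demo-app
        image: registry.example.com/demo-app:1.0   # placeholder image
        env:         # a list of maps nested inside a list item
        - name: LOG_LEVEL
          value: info
```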

Speed is of the essence when it comes to app deployment: it reduces our time-to-market, lets our customers experience the newest features as fast as possible, and gets us the rawest possible feedback.

We could look into the vast variety of application deployment techniques, but we strongly believe this subject is already widely covered by the K8s community. We do, however, want to point out a few important criteria to take into consideration for your success:

  • choosing the right app deployment technique is crucial to your execution, so take your time
  • do weigh the impact on app and user experience, and try assigning a score to each test
  • research deployment methods with the lowest shutdown and boot duration, even at the expense of a slower roll-out
  • although complex strategies can bring more control, make sure the user test stays close to the real app experience
  • some strategies can prove costly, so double-check that yours fits your needs (a rolling-update sketch follows this list)
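
As one concrete instance of these trade-offs, Kubernetes' default RollingUpdate strategy lets you trade spare capacity (cost) against roll-out speed and availability. The figures below are illustrative, not a recommendation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod during roll-out (extra cost)
      maxUnavailable: 0      # never drop below desired capacity (availability)
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.1   # new version being rolled out
```

With maxUnavailable: 0 and maxSurge: 1, the roll-out is slower but never reduces serving capacity; inverting the two values inverts the trade-off.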

Conclusion

In our journey towards offering the next-gen no-code software development platform, we have proudly joined the Kubernetes family, and we hope to overcome the burden of complexity as soon as possible.

We do have concerns about K8s security; we'll do our best to compensate with our own logic.

We like the fact that there's an abundance of open-source tools (both young and heavily tested) for any task that K8s makes hard. And we like the fact that free information is available to everyone.

CP