The no-nonsense guide for Engineers

Evaluating any new technology takes a lot of time when you need to sift through all of the marketing claims, collect the grains of truth, and piece together an understanding of what is under the hood.


I will save you time by sharing brutal facts directly from the factory floor.

We built CAST AI to help developers deploy, manage, and cost-optimize their Kubernetes clusters. Our mission is to end cloud provider lock-in and help developers go multi-cloud.

CAST AI can be used in a single-cloud scenario, but we also built it for those bold dev teams that want to gain ultimate control and flexibility in how they run their applications on many different cloud platforms at the same time.

We take advantage of the fact that in some places around the world, Cloud Service Providers (CSPs) are located so close to one another geographically that we can treat the different cloud vendors as Availability Zones (they sit within a 5 ms latency zone of each other). For example (a quick latency check follows this list):

  • Frankfurt (EU Central)
  • Ashburn (US East)
  • Los Angeles (US West), etc.
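
If you want to sanity-check that latency claim for your own regions, a rough measurement is enough. Below is a minimal Python sketch that times TCP connection setup to two endpoints; the hostnames are hypothetical placeholders for VMs you would run in each cloud within the same metro area.

```python
# Minimal sketch: measure TCP round-trip time to endpoints expected to sit in
# the same metro area. The hostnames below are hypothetical placeholders.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time to open a TCP connection, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += (time.perf_counter() - start) * 1000
    return total / samples

# Hypothetical endpoints, one per cloud, both in the Frankfurt metro area.
for host in ["vm-in-aws-eu-central-1.example.com",
             "vm-in-gcp-europe-west3.example.com"]:
    print(f"{host}: ~{tcp_rtt_ms(host):.1f} ms")
```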

Check this out: Learn how to find the multi-cloud Goldilocks Zone with CAST AI

How does CAST AI work?

To get started, you need to have an account already set up on one or more of the three major cloud providers we support at the moment (more to come in the future!): Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).

Once you have an account, you will need to create “ServiceAccounts” on each of the clouds you use. That’s how you allow CAST AI to create, orchestrate, and maintain Kubernetes clusters on your behalf.

CAST AI provides a managed Kubernetes service like EKS / AKS / GKE, but still ensures the vanilla Kubernetes experience you’re familiar with. You get a Kubernetes cluster whose control plane (master nodes) is distributed across multiple clouds, with worker nodes dynamically managed by an autoscaler designed to be aware of multi-cloud complexities and cost concerns.
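
Because the result is still a vanilla cluster, you can inspect it with ordinary tooling. Here is a minimal sketch, assuming your kubeconfig already points at the cluster and that the standard topology labels are set, that lists the worker nodes together with the provider each one runs on (it uses the open-source `kubernetes` Python client, not a CAST AI API):

```python
# Minimal sketch: group a cluster's nodes by provider and zone using standard
# Kubernetes metadata. Requires the `kubernetes` Python package and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    zone = labels.get("topology.kubernetes.io/zone", "unknown-zone")
    # providerID is set by the cloud provider, e.g. "aws://..." or "gce://..."
    provider = (node.spec.provider_id or "unknown").split(":")[0]
    print(f"{node.metadata.name}: provider={provider}, zone={zone}")
```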

A Kubernetes cluster is created on regular VMs in a new VPC. It is CAST AI’s responsibility to configure, patch, and upgrade the VMs and Kubernetes components, but you have full access to these VMs to see what’s under the hood.

CAST AI clusters do not use the more traditional node pool concept, which only allows scaling your cluster in one dimension and requires you to choose node sizes upfront. Instead, node count and size are chosen automatically by our autoscaling engine, whose behavior is governed by user-defined Policies. The autoscaler chooses the most cost-effective VM types across the different clouds based on your application’s actual compute demand. The nodes are picked by our Multi-Cloud Compare tool, which brings together performance data and costs.
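
To make the idea concrete, here is a deliberately simplified sketch of cost-aware instance selection. It is not our actual engine, and the instance types and hourly prices are illustrative placeholders:

```python
# Toy illustration of cost-aware instance selection (not CAST AI's engine):
# given pending CPU/memory demand, pick the cheapest VM type across clouds
# that can satisfy it. Prices are illustrative placeholders.
candidate_vms = [
    {"cloud": "aws",   "type": "m5.xlarge",       "cpu": 4, "mem_gb": 16, "usd_per_hour": 0.192},
    {"cloud": "gcp",   "type": "n2-standard-4",   "cpu": 4, "mem_gb": 16, "usd_per_hour": 0.194},
    {"cloud": "azure", "type": "Standard_D4s_v3", "cpu": 4, "mem_gb": 16, "usd_per_hour": 0.192},
]

def pick_cheapest(demand_cpu: float, demand_mem_gb: float):
    fitting = [vm for vm in candidate_vms
               if vm["cpu"] >= demand_cpu and vm["mem_gb"] >= demand_mem_gb]
    return min(fitting, key=lambda vm: vm["usd_per_hour"]) if fitting else None

print(pick_cheapest(demand_cpu=3, demand_mem_gb=12))
```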

CAST AI orchestrates an IPsec VPN between the selected CSP networks (VPCs) and adds the Cilium CNI with VXLAN tunnels to abstract the underlying networks away from Kubernetes applications. You get a flat network spanning 2 or 3 clouds, and every pod, no matter where it currently runs, can reach any other pod inside Kubernetes as well as the CSPs’ native services – for instance Pub/Sub, SageMaker, or a managed SQL database.
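
One way to see the flat network in practice is simply to list every pod’s IP next to the node it runs on; in a multi-cloud cluster those IPs remain directly reachable from any other pod, regardless of which CSP hosts it. A minimal sketch with the `kubernetes` Python client (assuming kubeconfig access):

```python
# Minimal sketch: print each pod's IP and the node it is scheduled on.
# In a flat multi-cloud network these pod IPs are reachable from any other pod.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
          f"ip={pod.status.pod_ip}, node={pod.spec.node_name}")
```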

If your application is network data-intensive (25 TB+ traveling between “Availability Zones” per month), there is an option to switch from a VPN over the internet to a dedicated Direct/Express Interconnect between clouds, further reducing the Total Cost of Ownership through lower egress costs and latency.
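
As a back-of-the-envelope illustration of why that switch can pay off, here is a tiny cost comparison. All of the rates and the port fee are assumptions made up for the sake of the example – substitute your own CSP pricing:

```python
# Back-of-the-envelope comparison of monthly inter-cloud traffic cost over the
# public internet versus a dedicated interconnect. Every rate below is an
# assumption for illustration only.
monthly_gb = 25 * 1024  # 25 TB of inter-cloud traffic per month

internet_egress_per_gb = 0.09      # assumed internet egress rate, USD/GB
interconnect_egress_per_gb = 0.02  # assumed reduced egress rate over interconnect, USD/GB
interconnect_port_fee = 500.0      # assumed flat monthly port/attachment fee, USD

internet_cost = monthly_gb * internet_egress_per_gb
interconnect_cost = monthly_gb * interconnect_egress_per_gb + interconnect_port_fee

print(f"VPN over the internet:  ~${internet_cost:,.0f} per month")
print(f"Dedicated interconnect: ~${interconnect_cost:,.0f} per month")
```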

Our platform offers a few lifesaver features for making the multi-cloud journey easier:

  1. We supply our own block storage driver (CSI) for provisioning persistent volumes in a multi-cloud cluster: you get a block volume attached from whichever CSP your pod lands on. You can resize PVCs, and the volume expands online automatically. There is no performance overhead – just a default StorageClass that handles native EBS volumes, Azure Disks, and Google Cloud persistent disks (see the sketch after this list).
  2. Out of the box, your application is ready to be exposed to the internet:
    1. A pre-installed CAST AI Ingress with up to 3 LoadBalancers pre-created (one on every cloud that your cluster spans).
    2. A Global DNS Load Balancer resource created for Active/Active multi-cloud application exposure. We provide a CNAME entry that hides all of the complexity.
    3. Built-in Let’s Encrypt certificate provisioning and rotation.
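
As an example of the storage experience, here is a minimal sketch that creates a PersistentVolumeClaim against the cluster’s default StorageClass and later grows it online by patching the requested size. The names and sizes are examples; it assumes kubeconfig access and the `kubernetes` Python client:

```python
# Minimal sketch: create a PVC against the default StorageClass, then expand it
# online by patching the requested size. Names and sizes are example values.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc_body = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        # storageClassName is omitted, so the cluster's default StorageClass is used
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_body)

# Later: grow the volume by patching the requested size; expansion happens online.
v1.patch_namespaced_persistent_volume_claim(
    name="demo-data",
    namespace="default",
    body={"spec": {"resources": {"requests": {"storage": "20Gi"}}}},
)
```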

All of this without any new CRDs (Custom Resource Definitions); you get a plain vanilla Kubernetes experience.

To see how it works, deploy this app: https://github.com/CastAI/examples – Connect to preview

Opinionated and production-ready

CAST AI clusters come with a few predetermined configuration choices. We designed the networking configuration to allow for maximum observability while maintaining excellent network performance. 

Clusters are built on Ubuntu 20.04 LTS (5.4 kernel), which enables the use of eBPF technology powered by Cilium. You also won’t find the kube-proxy component inside the cluster – no iptables, no scaling issues.
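
You can verify the kube-proxy part yourself. A minimal sketch that checks the DaemonSets in kube-system (assuming kubeconfig access and the `kubernetes` Python client):

```python
# Minimal sketch: confirm there is no kube-proxy DaemonSet in kube-system.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

names = [ds.metadata.name for ds in apps.list_namespaced_daemon_set("kube-system").items]
print("kube-proxy present:", any("kube-proxy" in name for name in names))
print("daemonsets in kube-system:", names)
```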

We made some clear choices about logging, metrics, and health checks, based on many combined years of Kubernetes expertise. CAST AI includes auto-healing, so if something breaks or some component is accidentally deleted, there’s no need to worry – our service will bring your k8s cluster back to a healthy state.

Your application data or event logs will never leave your cluster. 

This is how development teams get to spend less time getting their cluster production-ready and more time on mission-critical tasks such as writing more code.

Built-in observability

When building a cluster with CAST AI, you will see that it automatically installs Prometheus for metrics collection, plus Elasticsearch and Filebeat for log collection and indexing. It also provides single sign-on to the observability frontends: Kibana for logs, Grafana for metrics, and the Kubernetes Dashboard for a quick glance at your cluster state and what is running inside. Moreover, we preinstall the Hubble UI, based on Cilium, for network and service connectivity visibility.
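
Since Prometheus is a standard installation, you can talk to it over its regular HTTP API. A minimal sketch, assuming you have port-forwarded the Prometheus service to localhost:9090 (the exact namespace and service name depend on the installation) and have the `requests` package:

```python
# Minimal sketch: query the pre-installed Prometheus over its HTTP API.
# Assumes the Prometheus service has been port-forwarded to localhost:9090, e.g.:
#   kubectl port-forward -n <prometheus-namespace> svc/<prometheus-service> 9090:9090
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "sum(rate(container_cpu_usage_seconds_total[5m])) by (instance)"},
    timeout=10,
)
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("instance"), result["value"][1])
```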

Screenshot: Kubernetes cluster dashboard

We’ve added all of these things because they form the core basics for running a production environment. End-to-end visibility is critical for troubleshooting issues successfully. CAST AI offers a set of options to make it all easier. 

Developers can freely remove or substitute these components (except Prometheus), but it’s great to have a head start on these elements for running a Kubernetes cluster smoothly.

You can leave anytime

If you’d like to stop using CAST AI, no problem – you can leave anytime. Your cluster will stay operational even if you use our multi-cloud features.

Try CAST AI for free until the official launch

We want to build the best multi-cloud Kubernetes solution, but we can’t do that without your help.

Join our developer community on Slack – we bring together cloud experts, software developers, and our lead engineers to brainstorm and share their thoughts about Kubernetes and multi-cloud.

If you’d like to try CAST AI, here’s some good news: we are offering access to our platform for free, for up to 20 CPUs.

Grab your free spot now and see how CAST AI works for yourself. 

