
Kubernetes Cluster Management

BadgerPanel can deploy game servers to Kubernetes clusters as an alternative to traditional daemon-based nodes. Kubernetes deployments use an orchestrator component that runs inside the cluster and communicates with the panel over a WebSocket connection.

Architecture Overview

  • Orchestrator: A Go binary deployed as a pod in your Kubernetes cluster. It connects to the panel, receives deployment instructions, and manages game server pods, services, and persistent volume claims.
  • Cluster: A Kubernetes cluster registered in the panel via its orchestrator. One orchestrator manages one cluster.
  • Game Server Pod: Each game server runs as an individual pod with resource requests, limits, and a persistent volume for data.
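The architecture above can be sketched as a pod manifest. The names below (`mc-7f2a`, the image, the PVC name) are illustrative only; the actual manifests are generated by the orchestrator:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc-7f2a               # illustrative; the orchestrator generates names
  namespace: badger-servers   # default game server namespace
spec:
  containers:
    - name: server
      image: example/minecraft:latest   # illustrative image
      resources:
        requests: { cpu: "1", memory: 4Gi }
        limits:   { cpu: "2", memory: 5Gi }   # limit includes memory overhead
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mc-7f2a-data   # per-server PVC for persistent data
```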

Adding an Orchestrator

  1. Navigate to Admin > Orchestrators and click Add Orchestrator.
  2. Enter a name and description for the orchestrator.
  3. Click Create. The panel generates a one-line install script.
  4. Run the install script on a machine with kubectl access to your cluster:
```bash
curl -sSL https://panel.example.com/api/orchestrators/install/TOKEN | bash
```

The script creates a namespace (badger-system), deploys the orchestrator as a Deployment with a ServiceAccount, ClusterRole, and ClusterRoleBinding, and configures the connection back to the panel.

  5. Once the orchestrator connects, a cluster record is created automatically. The cluster appears under Admin > Kubernetes.

Cluster Dashboard

The cluster detail page (Admin > Kubernetes > [cluster]) provides a multi-tab view:

  • Overview -- Cluster status, node count, pod count, CPU and memory utilization gauges, and orchestrator uptime.
  • Nodes -- Lists all Kubernetes nodes with their status, roles, resource capacity, and current usage.
  • Pods -- Lists pods in the game server namespace. Filterable by status (Running, Pending, Failed, etc.). Shows CPU and memory consumption per pod.
  • Deployments -- Shows active deployments and their replica status.
  • Events -- A real-time feed of Kubernetes events from the cluster, filterable by type (Normal, Warning) and namespace.

Namespace Management

By default, game servers are deployed to the badger-servers namespace. You can change this in the cluster settings. The panel tracks which namespaces are available and filters out system namespaces (kube-system, kube-public, kube-node-lease, default, badger-system) from user-facing views.

You can restrict the orchestrator to specific namespaces by configuring the Allowed Namespaces list in cluster settings. If left empty, the orchestrator can operate in any namespace.

Resource Allocation

Cluster-level resource settings are configured on the cluster detail page under Settings:

  • Storage Class -- The Kubernetes StorageClass used for game server PVCs.
  • Default Storage Size -- Default PVC size for new servers (e.g., 10Gi).
  • NodePort Range -- The port range for NodePort services (default: 30000-32767).
  • Memory Overhead Percent -- Extra memory allocated to pods beyond the server's configured limit, to account for JVM or runtime overhead (default: 25%).
  • Host Network -- When enabled, game server pods use the host network namespace, bypassing the cluster CNI. This can improve network performance for latency-sensitive game traffic, but because ports bind directly on the node, two servers on the same node cannot listen on the same port.
  • Connection Address -- A static IP or hostname shown to end users for connecting to their servers.
  • Service Type -- Either NodePort or LoadBalancer, depending on your cluster's networking setup.

System Namespace Filtering

The panel automatically excludes Kubernetes system namespaces from game server deployment targets. The following namespaces are filtered: kube-system, kube-public, kube-node-lease, default, and badger-system. This prevents accidental deployment of game servers into infrastructure namespaces.
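The filtering described above amounts to a set-membership check. A minimal sketch in Go (the function name is illustrative; the namespace list is the one documented here):

```go
package main

import "fmt"

// systemNamespaces mirrors the list of namespaces the panel excludes
// from game server deployment targets.
var systemNamespaces = map[string]bool{
	"kube-system":     true,
	"kube-public":     true,
	"kube-node-lease": true,
	"default":         true,
	"badger-system":   true,
}

// filterUserNamespaces returns only the namespaces eligible as
// game server deployment targets.
func filterUserNamespaces(all []string) []string {
	var out []string
	for _, ns := range all {
		if !systemNamespaces[ns] {
			out = append(out, ns)
		}
	}
	return out
}

func main() {
	fmt.Println(filterUserNamespaces(
		[]string{"kube-system", "badger-servers", "default", "minecraft"}))
	// prints [badger-servers minecraft]
}
```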

Auto-Migration

When the auto-migration feature is enabled, the orchestrator can automatically move game server pods between Kubernetes nodes based on resource pressure. If a node becomes overloaded, pods are rescheduled to nodes with available capacity. Configure auto-migration settings from the cluster detail page under Settings > Auto-Migration.
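The decision logic can be thought of as two steps: detect pressure, then pick a destination with headroom. A hedged sketch under assumed data structures (the type and function names below are illustrative, not the orchestrator's actual implementation):

```go
package main

import "fmt"

// nodeLoad is an illustrative view of a node's memory pressure.
type nodeLoad struct {
	name        string
	usedMiB     int
	capacityMiB int
}

// overloaded reports whether a node exceeds the pressure threshold
// (e.g. 0.90 for 90% memory utilization).
func overloaded(n nodeLoad, threshold float64) bool {
	return float64(n.usedMiB) > threshold*float64(n.capacityMiB)
}

// pickTarget chooses the node with the most free memory that can fit
// the pod, or "" if no node has capacity.
func pickTarget(nodes []nodeLoad, podMiB int) string {
	best, bestFree := "", 0
	for _, n := range nodes {
		free := n.capacityMiB - n.usedMiB
		if free >= podMiB && free > bestFree {
			best, bestFree = n.name, free
		}
	}
	return best
}

func main() {
	nodes := []nodeLoad{
		{"node-a", 15000, 16000}, // 93% used: over a 90% threshold
		{"node-b", 4000, 16000},  // plenty of headroom
	}
	if overloaded(nodes[0], 0.9) {
		fmt.Println("migrate to", pickTarget(nodes, 2048))
		// prints migrate to node-b
	}
}
```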

Removing a Cluster

To remove a Kubernetes cluster, first delete or transfer all servers deployed to it. Then navigate to the orchestrator detail page and click Delete. This removes the cluster and orchestrator records from the panel. You should also delete the orchestrator deployment from your cluster:

```bash
kubectl delete namespace badger-system
```

Note that the ClusterRole and ClusterRoleBinding created by the install script are cluster-scoped objects, so deleting the namespace does not remove them; delete them separately with kubectl.
