Architecture Overview¶
This guide explains the Bindy architecture, focusing on the dual-cluster model that enables multi-tenancy and flexible deployment patterns.
Table of Contents¶
- Architecture Principles
- Cluster Models
- Resource Hierarchy
- Reconciliation Flow
- Multi-Tenancy Model
- Namespace Isolation
- Decision Tree: Choosing a Cluster Model
- Deployment Architecture
- Next Steps
Architecture Principles¶
Bindy follows Kubernetes operator pattern best practices:
- Declarative Configuration: Users declare desired state via CRDs, operators reconcile to match
- Level-Based Reconciliation: Operators continuously ensure actual state matches desired state
- Status Subresources: All CRDs expose status for observability
- Finalizers: Proper cleanup of dependent resources before deletion
- Generation Tracking: Reconcile only when the spec changes (tracked via `metadata.generation`)
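To make the last three principles concrete, here is a minimal status sketch. It is illustrative only: `observedGeneration` and `conditions` follow common Kubernetes operator conventions and are assumptions here, not confirmed Bindy schema.

```yaml
# Illustrative sketch: observedGeneration and conditions follow common
# operator conventions and are assumptions, not confirmed Bindy schema.
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: example-zone
  namespace: team-a
  generation: 3            # bumped by the API server on each spec change
status:
  observedGeneration: 3    # when equal to generation, no reconcile is needed
  conditions:
    - type: Ready
      status: "True"
```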
Cluster Models¶
Bindy provides two cluster models to support different organizational patterns:
Namespace-Scoped Clusters (Bind9Cluster)¶
Use Case: Development teams manage their own DNS infrastructure within their namespace.
Characteristics:
- Isolated to a single namespace
- Teams manage their own DNS independently
- RBAC scoped to namespace (Role/RoleBinding)
- Cannot be referenced from other namespaces
YAML Example:
```yaml
apiVersion: bindy.firestoned.io/v1beta1
kind: Bind9Cluster
metadata:
  name: dev-team-dns
  namespace: dev-team-alpha
spec:
  version: "9.18"
  primary:
    replicas: 1
  secondary:
    replicas: 1
```
Cluster-Scoped Clusters (ClusterBind9Provider)¶
Use Case: Platform teams provide shared DNS infrastructure accessible from all namespaces.
Characteristics:
- Cluster-wide visibility (no namespace)
- Platform team manages centralized DNS
- RBAC requires ClusterRole/ClusterRoleBinding
- DNSZones in any namespace can reference it
YAML Example:
```yaml
apiVersion: bindy.firestoned.io/v1beta1
kind: ClusterBind9Provider
metadata:
  name: shared-production-dns
  # No namespace - cluster-scoped resource
spec:
  version: "9.18"
  primary:
    replicas: 3
    service:
      type: LoadBalancer
  secondary:
    replicas: 2
```
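Because the provider is cluster-scoped, a DNSZone in any namespace can reference it by name. A minimal sketch, assuming only fields shown elsewhere in this guide (`clusterProviderRef`, `zoneName`, `soaRecord`):

```yaml
# Sketch: a namespaced DNSZone referencing the cluster-scoped provider above.
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: app-zone
  namespace: team-b                  # any namespace may reference the provider
spec:
  zoneName: app.example.com
  clusterProviderRef: shared-production-dns
  soaRecord:
    primaryNs: ns1.app.example.com.
    adminEmail: admin.app.example.com.
    serial: 2024010101
```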
Resource Hierarchy¶
The complete resource hierarchy shows how components relate:
Relationship Legend:
- Solid arrow (→): Direct reference by name
- Dashed arrow (-.->): Cluster-scoped reference
- Bold arrow (==>): Label selector-based selection
Key Relationships¶
- DNSZone → Cluster References (Optional):
  - `spec.clusterRef`: References a namespace-scoped `Bind9Cluster` (same namespace)
  - `spec.clusterProviderRef`: References a cluster-scoped `ClusterBind9Provider`
  - Note: These are optional; zones can select instances directly via label selectors
- DNSZone → Bind9Instance Selection (Primary):
  - `spec.bind9InstancesFrom`: Label selectors that select instances
  - Direction: Zones select instances (NOT instances selecting zones)
  - Selection Methods (see the sketch after this list):
    - Via `clusterRef`: All instances with a matching `spec.clusterRef`
    - Via `bind9InstancesFrom`: Instances matching the label selectors
    - Via BOTH: Union of instances from both methods (duplicates removed)
  - Status Tracking: `status.bind9Instances[]` lists selected instances
  - Count Field: `status.bind9InstancesCount` shows the number of instances
- Bind9Instance → Cluster Reference:
  - `spec.cluster_ref`: Can reference either a `Bind9Cluster` or a `ClusterBind9Provider`
  - The operator auto-detects the cluster type
  - Used for instance organization and management
- DNSZone → DNS Records Association:
  - `spec.recordsFrom`: Label selectors that select records
  - Records use `metadata.labels` to be selected by zones
  - Namespace Isolation: Records can ONLY be selected by zones in their own namespace
  - Status Tracking: `status.selectedRecords[]` lists matched records
  - Count Field: `status.recordCount` shows the number of records
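The sketch below shows a zone combining both selection methods. It assumes `bind9InstancesFrom` takes the same `selector.matchLabels` shape as the `recordsFrom` field shown later in this guide; treat the exact schema as an assumption rather than confirmed API.

```yaml
# Sketch: a DNSZone selecting instances via BOTH clusterRef and label
# selectors. Assumption: bind9InstancesFrom mirrors the recordsFrom shape.
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: example-com
  namespace: team-a
spec:
  zoneName: example.com
  clusterRef: dev-team-dns      # selects all instances referencing this cluster
  bind9InstancesFrom:
    - selector:
        matchLabels:
          dns-role: edge        # plus any instance carrying this label
# The operator takes the union of both sets, deduplicated by instance UID.
```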
Reconciliation Flow¶
DNSZone Reconciliation¶
Architecture: Zones select instances via `spec.bind9InstancesFrom` label selectors or `spec.clusterRef` references.
Key Architectural Points:
- Event-Driven: The operator reacts to zone changes via Kubernetes watch events
- Instance Selection: Zones select instances (not instances selecting zones)
- Batched Status Updates: All status changes are collected in `DNSZoneStatusUpdater` and applied atomically
- Diff Detection: Status is only patched when values actually changed (prevents reconciliation storms)
- Automatic Count Computation: `bind9InstancesCount` is computed automatically when instances are added or removed
- UID-Based Deduplication: Matching instances by UID prevents duplicates when both selection methods are used (a resulting status is sketched below)
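A hedged sketch of the status a reconciled zone might carry. The `bind9Instances[]` and `bind9InstancesCount` field names come from this guide; the per-entry shape (name, namespace, uid) is an illustrative assumption.

```yaml
# Sketch of a reconciled DNSZone status. bind9Instances[] and
# bind9InstancesCount are documented above; the per-entry shape is assumed.
status:
  bind9Instances:
    - name: dev-team-dns-primary-0
      namespace: dev-team-alpha
      uid: 0a1b2c3d-e4f5-6789-abcd-ef0123456789   # UID used for deduplication
    - name: dev-team-dns-secondary-0
      namespace: dev-team-alpha
      uid: 9f8e7d6c-b5a4-3210-fedc-ba9876543210
  bind9InstancesCount: 2
```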
ClusterBind9Provider Reconciliation¶
Multi-Tenancy Model¶
Bindy supports multi-tenancy through two organizational patterns:
Platform Team Pattern¶
Platform teams manage cluster-wide DNS infrastructure:
RBAC Setup:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: platform-dns-admin
rules:
  - apiGroups: ["bindy.firestoned.io"]
    resources: ["clusterbind9providers"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-dns
subjects:
  - kind: Group
    name: platform-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: platform-dns-admin
  apiGroup: rbac.authorization.k8s.io
```
Development Team Pattern¶
Development teams manage namespace-scoped DNS:
RBAC Setup:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dns-admin
  namespace: dev-team-alpha
rules:
  - apiGroups: ["bindy.firestoned.io"]
    resources: ["bind9clusters", "dnszones", "arecords", "cnamerecords", "mxrecords", "txtrecords"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-dns
  namespace: dev-team-alpha
subjects:
  - kind: Group
    name: dev-team-alpha
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dns-admin
  apiGroup: rbac.authorization.k8s.io
```
Namespace Isolation¶
Security Principle: DNSZones and records are always namespace-scoped, even when referencing cluster-scoped resources.
Isolation Rules:
- Records can ONLY reference zones in their own namespace
  - The operator uses `Api::namespaced()` to enforce this
  - Cross-namespace references are impossible
- DNSZones are namespace-scoped
  - Even when referencing a `ClusterBind9Provider`
  - Each team manages its own zones
- RBAC controls zone management
  - Platform team: ClusterRole for `ClusterBind9Provider`
  - Dev teams: Role for `DNSZone` and records in their namespace
Example - Record Isolation:
```yaml
# team-a namespace - DNSZone with selector
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: team-a-zone
  namespace: team-a
spec:
  zoneName: team-a.com
  clusterRef: production-dns
  recordsFrom:
    - selector:
        matchLabels:
          zone: team-a.com
  soaRecord:
    primaryNs: ns1.team-a.com.
    adminEmail: admin.team-a.com.
    serial: 2024010101
---
# team-a namespace - Record with matching label
apiVersion: bindy.firestoned.io/v1beta1
kind: ARecord
metadata:
  name: www
  namespace: team-a
  labels:
    zone: team-a.com  # ✅ Matches DNSZone selector in same namespace
spec:
  name: www
  ipv4Address: "192.0.2.1"
---
# This record would NOT be selected - namespace isolation prevents cross-namespace selection
apiVersion: bindy.firestoned.io/v1beta1
kind: ARecord
metadata:
  name: www-isolated
  namespace: team-a
  labels:
    zone: team-b.com  # ❌ No DNSZone in team-a with this selector
spec:
  name: www
  ipv4Address: "192.0.2.1"
```
Decision Tree: Choosing a Cluster Model¶
Answer these questions to determine which cluster model fits your use case:
- Do multiple teams across namespaces need to share DNS infrastructure? Use a cluster-scoped `ClusterBind9Provider`, managed by the platform team with ClusterRole/ClusterRoleBinding RBAC.
- Does a single team own its DNS within one namespace? Use a namespace-scoped `Bind9Cluster` with Role/RoleBinding RBAC.
- Need both? Combine them: a shared `ClusterBind9Provider` for common zones plus per-team `Bind9Cluster` resources for independent DNS.
Deployment Architecture¶
Operator Deployment¶
The Bindy operator uses a centralized operator pattern - a single operator instance manages all BIND9 DNS infrastructure across the cluster.
Key Deployment Characteristics:
- Single Operator Instance: One operator manages all DNS infrastructure
- Namespace: Typically deployed in `dns-system` (configurable)
- Service Account: `bindy-operator` with cluster-wide RBAC permissions
- Event-Driven: Uses the Kubernetes watch API (not polling) for efficient resource monitoring
- Zone Transfer: Leverages native BIND9 AXFR/IXFR for primary-secondary replication
- No Sidecars: Operator directly manages BIND9 configuration files via HTTP API
Deployment Components¶
1. CRDs (Custom Resource Definitions)
- Installed cluster-wide before operator deployment
- Define schema for Bind9Cluster, ClusterBind9Provider, DNSZone, record types
- See CRD Installation
2. RBAC (Role-Based Access Control)
- ClusterRole: Grants the operator permissions to manage CRDs across all namespaces
- ServiceAccount: Identity for the operator pod
- ClusterRoleBinding: Links the ServiceAccount to the ClusterRole
- See RBAC Configuration for details
3. Operator Deployment
- Image: ghcr.io/firestoned/bindy:latest
- Replicas: 1 (high availability not yet supported - leader election planned)
- Resources: 128Mi memory request, 512Mi limit; 100m CPU request, 500m limit
- Health Checks: Liveness and readiness probes for pod health monitoring
4. Managed Resources
- Secrets: RNDC keys for secure BIND9 communication (auto-generated per instance)
- ConfigMaps: BIND9 zone files and configuration (generated from DNSZone specs)
- Deployments: BIND9 server pods (created from Bind9Instance specs)
- Services: DNS endpoints (TCP/UDP port 53, LoadBalancer/NodePort/ClusterIP)
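Putting components 2 and 3 together, a minimal operator Deployment might look like the sketch below. The image, namespace, service account, replica count, and resource figures come from this guide; labels are illustrative assumptions, and the documented liveness/readiness probes are omitted because their endpoints are not specified here.

```yaml
# Sketch of the operator Deployment. Image, namespace, service account,
# replicas, and resources are documented above; label names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bindy-operator
  namespace: dns-system
spec:
  replicas: 1                        # leader election not yet supported
  selector:
    matchLabels:
      app: bindy-operator
  template:
    metadata:
      labels:
        app: bindy-operator
    spec:
      serviceAccountName: bindy-operator
      containers:
        - name: operator
          image: ghcr.io/firestoned/bindy:latest
          resources:
            requests:
              memory: 128Mi
              cpu: 100m
            limits:
              memory: 512Mi
              cpu: 500m
```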
Deployment Workflow¶
When you install Bindy, resources are deployed in this order:
Step-by-Step:
1. Deploy CRDs: Install the Custom Resource Definitions (one-time setup)
2. Deploy RBAC: Create the ServiceAccount, ClusterRole, and ClusterRoleBinding
3. Deploy Operator: Start the Bindy operator pod in the `dns-system` namespace
4. Operator Ready: The operator starts watching for Bind9Cluster and DNSZone resources
5. Create Bind9Cluster: User creates a cluster definition (namespace- or cluster-scoped)
6. Create Bind9Instances: The operator creates BIND9 Deployment/Service/ConfigMap resources
7. Create DNSZones: User creates zone definitions referencing the cluster
8. Create DNS Records: User creates A, CNAME, MX, and TXT records
9. BIND9 Serving DNS: BIND9 pods respond to DNS queries on port 53
Zone Synchronization Architecture¶
Bindy uses native BIND9 zone transfer for high availability:
Key Points:
- Operator Updates Primary Only: Operator writes zone files to primary BIND9 instances
- BIND9 Handles Replication: Native AXFR/IXFR zone transfer to secondaries
- TSIG Authentication: Zone transfers secured with HMAC-SHA256 keys
- Automatic NOTIFY: Primaries notify secondaries of zone changes
- Incremental Transfers: IXFR transfers only changed records (efficient)
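To make these points concrete, here is a hedged sketch of the kind of BIND9 configuration the operator might render into a primary instance's ConfigMap. The TSIG algorithm (hmac-sha256), NOTIFY, and transfer behavior mirror the points above; the key name, file paths, and exact option set are illustrative assumptions, not the operator's actual output.

```yaml
# Illustrative only: key name, file paths, and option set are assumptions;
# the TSIG, NOTIFY, and transfer settings mirror the points above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-team-dns-primary-config
  namespace: dev-team-alpha
data:
  named.conf: |
    key "transfer-key" {
        algorithm hmac-sha256;        # TSIG-secured zone transfers
        secret "base64-encoded-secret-here";
    };
    zone "team-a.com" {
        type primary;
        file "/var/lib/bind/team-a.com.zone";
        allow-transfer { key "transfer-key"; };  # AXFR/IXFR to secondaries
        notify yes;                              # automatic NOTIFY on changes
    };
```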
High Availability Considerations¶
BIND9 Instance HA:
- ✅ Multiple Primaries: Deploy 2-3 primary instances (`replicas: 2` or `replicas: 3`)
- ✅ Multiple Secondaries: Deploy 2-3 secondary instances in different failure domains
- ✅ Zone Transfers: Secondaries sync from all primaries automatically
- ✅ LoadBalancer Service: Distribute DNS queries across all instances
Operator HA:
- ⚠️ Single Replica Only: The operator currently runs as a single instance
- 📋 Leader Election: Planned for a future release (multi-replica support)
- ✅ Stateless Design: Operator crashes are safe; all state lives in Kubernetes etcd
For BIND9 high availability setup, see High Availability Guide.
Resource Requirements¶
Operator Pod:
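The figures below restate the operator requests and limits listed under Deployment Components, expressed as a standard Kubernetes `resources` block:

```yaml
resources:
  requests:
    memory: 128Mi
    cpu: 100m
  limits:
    memory: 512Mi
    cpu: 500m
```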
BIND9 Pods (default):
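The guide does not state the BIND9 defaults here, so the values below are illustrative placeholders only, shown to give the block its shape; tune them per the list that follows.

```yaml
# Illustrative placeholders, not documented defaults.
resources:
  requests:
    memory: 256Mi
    cpu: 250m
  limits:
    memory: 1Gi
    cpu: "1"
```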
Adjust these based on:
- Number of zones managed
- Query volume (QPS)
- Zone transfer frequency
- DNSSEC signing (if enabled)
Next Steps¶
- Quick Start - Get started with Bindy in 5 minutes
- Step-by-Step Guide - Detailed installation for both cluster types
- Multi-Tenancy Guide - Detailed RBAC setup and examples
- Choosing a Cluster Type - Decision guide for cluster selection
- High Availability - Production-ready HA configuration