Common Issues
Solutions to frequently encountered problems.
Bind9Instance Issues
Pods Not Starting
Symptom: Bind9Instance created but pods not running
Diagnosis:
kubectl get pods -n dns-system -l instance=primary-dns
kubectl describe pod -n dns-system <pod-name>
Common Causes:
- Image pull errors - Check image name and registry access
- Resource limits - Insufficient CPU/memory on nodes
- RBAC issues - ServiceAccount lacks permissions
Solution:
# Check events
kubectl get events -n dns-system
# Fix resource limits
kubectl edit bind9instance primary-dns -n dns-system
# Increase resources.requests and resources.limits
# Verify RBAC
kubectl auth can-i create deployments \
--as=system:serviceaccount:dns-system:bindy
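If the node genuinely lacks headroom, set explicit requests and limits on the instance. A minimal sketch is shown below; the apiVersion and field layout are assumed to follow the other examples in this guide, so verify them against your installed CRD schema.
# Illustrative resource settings for a Bind9Instance (sketch only;
# apiVersion and field names assumed from the other examples in this guide)
apiVersion: bindy.firestoned.io/v1beta1
kind: Bind9Instance
metadata:
  name: primary-dns
  namespace: dns-system
spec:
  replicas: 1
  resources:
    requests:
      cpu: "250m"     # raise these if pods stay Pending on CPU/memory
      memory: "256Mi"
    limits:
      cpu: "1000m"
      memory: "1Gi"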
ConfigMap Not Created
Symptom: ConfigMap missing for Bind9Instance
Diagnosis:
kubectl get configmap -n dns-system
kubectl logs -n dns-system deployment/bindy | grep ConfigMap
Solution:
# Check controller logs for errors
kubectl logs -n dns-system deployment/bindy --tail=50
# Delete and recreate instance
kubectl delete bind9instance primary-dns -n dns-system
kubectl apply -f instance.yaml
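After recreating the instance, confirm that the ConfigMap exists and is actually mounted by the instance pods. The checks below assume the ConfigMap name contains the instance name and that the configuration is mounted as a ConfigMap volume.
# Confirm the ConfigMap was recreated (assumes its name contains the instance name)
kubectl get configmap -n dns-system | grep primary-dns
# Inspect the pod volumes and look for a configMap entry referencing it
kubectl get pods -n dns-system -l instance=primary-dns -o yaml | yq '.items[].spec.volumes'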
DNSZone Issues
No Instances Match Selector
Symptom: DNSZone status shows “No Bind9Instances matched selector”
Diagnosis:
kubectl get bind9instances -n dns-system --show-labels
kubectl get dnszone example-com -n dns-system -o yaml | yq '.spec.instanceSelector'
Solution:
# Verify labels on instances
kubectl label bind9instance primary-dns dns-role=primary -n dns-system
# Or update zone selector
kubectl edit dnszone example-com -n dns-system
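The selector and the instance labels must agree exactly. A sketch of a DNSZone that selects instances labeled dns-role=primary is below; the instanceSelector/matchLabels layout is an assumption that mirrors the recordSelector example later on this page.
# DNSZone selecting instances labeled dns-role=primary (sketch;
# instanceSelector layout assumed to mirror recordSelector)
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: example-com
  namespace: dns-system
spec:
  zoneName: example.com
  instanceSelector:
    matchLabels:
      dns-role: primary # must match the labels on the Bind9Instance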
Zone File Not Created
Symptom: Zone exists but no zone file in BIND9
Diagnosis:
kubectl exec -n dns-system deployment/primary-dns -- ls -la /var/lib/bind/zones/
kubectl logs -n dns-system deployment/bindy | grep "example-com"
Solution:
# Check if zone reconciliation succeeded
kubectl describe dnszone example-com -n dns-system
# Trigger reconciliation by updating zone
kubectl annotate dnszone example-com reconcile=true -n dns-system
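If describe does not make the failure obvious, the zone’s status conditions usually carry the reconciliation error message (assuming the controller publishes standard Kubernetes conditions).
# Inspect the zone's status conditions (assumes standard .status.conditions)
kubectl get dnszone example-com -n dns-system -o yaml | yq '.status.conditions'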
DNS Record Issues
Record Not Matching DNSZone
Symptom: Controller logs show “No matching DNSZone found” errors for a record that should match
Example Error:
ERROR No DNSZone matched label selector for record 'www-example' in namespace 'dns-system'
Root Cause: Mismatch between record labels and DNSZone label selectors.
Diagnosis:
# Check the record's labels
kubectl get arecord www-example -n dns-system -o yaml | yq '.metadata.labels'
# Check available DNSZones and their selectors
kubectl get dnszones -n dns-system
# Check the DNSZone's label selector
kubectl get dnszone example-com -n dns-system -o yaml | yq '.spec.recordSelector'
Understanding the Problem:
DNS records are matched to DNSZones using label selectors. The DNSZone defines which records it should include using spec.recordSelector.
Common mistakes:
- Record has label zone: internal-local but DNSZone expects zone: internal.local
- Record is missing the required label entirely
- DNSZone selector doesn’t match any records
- Typo in the label key or value
Solution:
Ensure record labels match the DNSZone’s selector.
Example:
Given this DNSZone:
apiVersion: bindy.firestoned.io/v1beta1
kind: DNSZone
metadata:
  name: example-com
  namespace: dns-system
spec:
  zoneName: example.com
  recordSelector:
    matchLabels:
      zone: example.com # ← Selector expects this label
Wrong:
# Record without matching label
apiVersion: bindy.firestoned.io/v1beta1
kind: ARecord
metadata:
  name: www-example
  namespace: dns-system
  # ✗ Missing labels!
spec:
  name: www
  ipv4Address: "192.0.2.1"
Correct:
# Record with matching label
apiVersion: bindy.firestoned.io/v1beta1
kind: ARecord
metadata:
  name: www-example
  namespace: dns-system
  labels:
    zone: example.com # ✓ Matches DNSZone selector
spec:
  name: www
  ipv4Address: "192.0.2.1"
Verification:
# After fixing, check the record reconciles
kubectl describe arecord www-example -n dns-system
# Check which DNSZone the record matched
kubectl get arecord www-example -n dns-system -o yaml | yq '.status.zone'
# Should see no errors in events
kubectl get events -n dns-system --sort-by='.lastTimestamp' | tail -10
See the Label Selectors Guide for more details.
Record Not Appearing in Zone
Symptom: ARecord created but not in zone file
Diagnosis:
# Check record status
kubectl get arecord www-example -n dns-system -o yaml
# Check zone file
kubectl exec -n dns-system deployment/primary-dns -- cat /var/lib/bind/zones/example.com.zone
Solution:
# Verify record has the correct labels
kubectl get arecord www-example -n dns-system -o yaml | yq '.metadata.labels'
# Check DNSZone selector
kubectl get dnszone example-com -n dns-system -o yaml | yq '.spec.recordSelector'
# Update labels to match selector
kubectl label arecord www-example zone=example.com -n dns-system --overwrite
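When several records are affected, it is quicker to dump every record’s labels in one pass and compare them against the zone’s recordSelector. An illustrative helper loop:
# Print each ARecord with its labels so they can be compared
# against the DNSZone's recordSelector in one pass
for r in $(kubectl get arecords -n dns-system -o name); do
  echo "== $r"
  kubectl get "$r" -n dns-system -o yaml | yq '.metadata.labels'
done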
DNS Query Not Resolving
Symptom: dig/nslookup fails to resolve
Diagnosis:
# Get DNS service IP
SERVICE_IP=$(kubectl get svc primary-dns -n dns-system -o jsonpath='{.spec.clusterIP}')
# Test query
dig @$SERVICE_IP www.example.com
# Check BIND9 logs
kubectl logs -n dns-system -l instance=primary-dns | tail -20
Solutions:
- Record doesn’t exist:
kubectl get arecords -n dns-system
kubectl apply -f record.yaml
- Zone not loaded:
kubectl logs -n dns-system -l instance=primary-dns | grep "loaded serial"
- Network policy blocking (an example allow policy is sketched below):
kubectl get networkpolicies -n dns-system
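If a restrictive NetworkPolicy is in place, DNS traffic to the BIND9 pods must be allowed explicitly. A sketch of such a policy is below; the pod selector label (instance: primary-dns) follows the labels used elsewhere in this guide, and the policy name is illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-queries # illustrative name
  namespace: dns-system
spec:
  podSelector:
    matchLabels:
      instance: primary-dns # label assumed from the examples above
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53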
Zone Transfer Issues
Secondary Not Receiving Transfers
Symptom: Secondary instance not getting zone updates
Diagnosis:
# Check secondary logs
kubectl logs -n dns-system -l dns-role=secondary | grep transfer
# Check if zone has secondary IPs configured
kubectl get dnszone example-com -n dns-system -o jsonpath='{.status.secondaryIps}'
# Check if secondaries are discovered
kubectl get bind9instance -n dns-system -l role=secondary -o jsonpath='{.items[*].status.podIP}'
Automatic Configuration:
As of v0.1.0, Bindy automatically discovers secondary IPs and configures zone transfers:
- Secondary pods are discovered via the Kubernetes API using label selectors (role=secondary)
- Primary zones are configured with also-notify and allow-transfer directives (you can inspect these directly, as shown after this list)
- Secondary IPs are stored in DNSZone.status.secondaryIps for tracking
- When secondary pods restart or are rescheduled and get new IPs, zones are automatically updated
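To see the resulting directives directly, grep the generated BIND configuration inside the primary pod. The /etc/bind path is an assumption based on common BIND9 images; adjust it to wherever your instance stores its configuration.
# Look for the generated also-notify / allow-transfer directives
# (/etc/bind is an assumed location - adjust for your image)
kubectl exec -n dns-system deployment/primary-dns -- \
grep -r -A 3 -E 'also-notify|allow-transfer' /etc/bind/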
Manual Verification:
# Check if zone has secondary IPs in status
kubectl get dnszone example-com -n dns-system -o yaml | yq '.status.secondaryIps'
# Expected output: List of secondary pod IPs
# - 10.244.1.5
# - 10.244.2.8
# Verify zone configuration on primary
kubectl exec -n dns-system deployment/primary-dns -- \
curl -s localhost:8080/api/zones/example.com | jq '.alsoNotify, .allowTransfer'
If Automatic Configuration Fails:
- Verify secondary instances are labeled correctly:
kubectl get bind9instance -n dns-system -o yaml | yq '.items[].metadata.labels'
# Expected labels for secondaries:
# role: secondary
# cluster: <cluster-name>
- Check DNSZone reconciler logs:
kubectl logs -n dns-system deployment/bindy | grep "secondary"
- Verify network connectivity:
# Test AXFR from secondary to primary
kubectl exec -n dns-system deployment/secondary-dns -- \
dig @primary-dns-service AXFR example.com
Recovery After Secondary Pod Restart:
When secondary pods are rescheduled and get new IPs:
- Detection: The reconciler automatically detects the IP change within 5-10 minutes (on the next reconciliation)
- Update: Zones are deleted and recreated with the new secondary IPs
- Transfer: Zone transfers resume automatically with the new IPs (the commands below show how to confirm this)
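To confirm the status has caught up after a restart, compare the live secondary pod IPs with the IPs recorded in the zone status; the two lists should match.
# Live secondary pod IPs as discovered by the controller
kubectl get bind9instance -n dns-system -l role=secondary -o jsonpath='{.items[*].status.podIP}'
# IPs currently recorded in the zone status - these should match
kubectl get dnszone example-com -n dns-system -o jsonpath='{.status.secondaryIps}'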
Manual Trigger (if needed):
# Force reconciliation by updating zone annotation
kubectl annotate dnszone example-com -n dns-system \
reconcile.bindy.firestoned.io/trigger="$(date +%s)" --overwrite
Performance Issues
High Query Latency
Symptom: DNS queries taking too long
Diagnosis:
# Test query time
time dig @$SERVICE_IP example.com
# Check resource usage
kubectl top pods -n dns-system -l instance=primary-dns
Solutions:
- Increase resources:
spec:
  resources:
    limits:
      cpu: "1000m"
      memory: "1Gi"
- Add more replicas:
spec:
  replicas: 3
- Enable caching (if appropriate for your use case)
RBAC Issues
Forbidden Errors in Logs
Symptom: Controller logs show “Forbidden” errors
Diagnosis:
kubectl logs -n dns-system deployment/bindy | grep Forbidden
# Check permissions
kubectl auth can-i create deployments \
--as=system:serviceaccount:dns-system:bindy \
-n dns-system
Solution:
# Reapply RBAC
kubectl apply -f deploy/rbac/
# Verify ClusterRoleBinding
kubectl get clusterrolebinding bindy-rolebinding -o yaml
# Restart controller
kubectl rollout restart deployment/bindy -n dns-system
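To see everything the controller’s ServiceAccount is currently allowed to do in the namespace, list its effective permissions:
# List all permissions granted to the controller's ServiceAccount
kubectl auth can-i --list \
--as=system:serviceaccount:dns-system:bindy \
-n dns-system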
Next Steps
- Debugging Guide - Detailed debugging procedures
- FAQ - Frequently asked questions
- Logging - Log analysis