Basic troubleshooting
This page provides commands and procedures for diagnosing issues with your NetFoundry Self-Hosted environment.
Check deployment status
Run nf-status to get a quick overview of all NetFoundry deployments across the ziti, support, and cert-manager
namespaces:
```shell
nf-status
```
All deployments should show the expected replica count in the READY column (e.g., 1/1). If any show 0/1,
investigate further using the commands below.
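The READY check can also be scripted. This is a minimal sketch that flags deployments short of their desired replica count; it runs awk over sample `kubectl get deployments -A` output (the names and counts shown are illustrative) so it can be tried without a cluster:

```shell
# Flag deployments whose READY column (ready/desired) shows a mismatch.
# The sample text stands in for real `kubectl get deployments -A` output.
sample='NAMESPACE   NAME              READY   UP-TO-DATE   AVAILABLE
ziti        ziti-controller   1/1     1            1
ziti        ziti-router-1     1/1     1            1
support     grafana           0/1     1            0'

printf '%s\n' "$sample" | awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $1 "/" $2 " not ready (" $3 ")" }'
# prints: support/grafana not ready (0/1)
```

To run the same filter against a live cluster, pipe `kubectl get deployments -A` into the awk command instead of the sample text.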
Check pod health
List pods in the Ziti namespace:
```shell
kubectl get pods -n ziti
```
List pods in the support namespace:
```shell
kubectl get pods -n support
```
Healthy pods show Running with all containers ready. Common unhealthy states:
| Status | Meaning |
|---|---|
| Pending | Pod cannot be scheduled — usually insufficient CPU, memory, or storage |
| CrashLoopBackOff | Container is crashing repeatedly — check logs for the root cause |
| ImagePullBackOff | Cannot pull the container image — check registry credentials and network access |
| Error | Container exited with an error — check logs |
To get details on why a pod is unhealthy:
```shell
kubectl describe pod <pod-name> -n ziti
```
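In a namespace with many pods, it helps to surface only the unhealthy ones. A sketch of a STATUS-column filter, run here against sample `kubectl get pods` output (the pod names are illustrative) so it works anywhere:

```shell
# Print only pods whose STATUS is not Running or Completed.
pods='NAME                    READY   STATUS             RESTARTS   AGE
ziti-controller-6f9c8       1/1     Running            0          3d
ziti-router-1-7d4b2         0/1     CrashLoopBackOff   7          3d'

printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 ": " $3 }'
# prints: ziti-router-1-7d4b2: CrashLoopBackOff
```

Against a live cluster, pipe `kubectl get pods -n ziti` (or `-n support`) into the same awk command.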
Check Kubernetes events
Events reveal scheduling failures, volume issues, and other cluster-level problems. View recent events sorted by time:
```shell
kubectl get events -n ziti --sort-by='.metadata.creationTimestamp'
kubectl get events -n support --sort-by='.metadata.creationTimestamp'
```
To watch events in real time:
```shell
kubectl get events --watch -n ziti
```
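Most scheduling and volume problems surface as Warning events, which you can request server-side with `kubectl get events -n ziti --field-selector type=Warning`. The same filter can be approximated client-side; this sketch runs it against sample event output (the event details are illustrative):

```shell
# Keep the header row plus any Warning-type events.
events='LAST SEEN   TYPE      REASON        OBJECT                      MESSAGE
5m          Normal    Scheduled     pod/ziti-controller-6f9c8   Successfully assigned
2m          Warning   FailedMount   pod/ziti-router-1-7d4b2     MountVolume.SetUp failed'

printf '%s\n' "$events" | awk 'NR == 1 || $2 == "Warning"'
```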
Check services and external access
Verify that LoadBalancer services have been assigned external addresses:
```shell
kubectl get services -n ziti
```
If the EXTERNAL-IP column shows <pending>, the cluster's load balancer provisioner may not be configured or may be
unable to allocate an address.
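Pending services are easy to miss in a long listing. A small sketch that reports only services still waiting for an address, shown here against sample `kubectl get services` output (the names and addresses are illustrative):

```shell
# Report LoadBalancer services whose EXTERNAL-IP is still <pending>.
svcs='NAME              TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)
ziti-controller   LoadBalancer   10.43.0.12   203.0.113.10   443:31443/TCP
ziti-router-1     LoadBalancer   10.43.0.34   <pending>      443:31444/TCP'

printf '%s\n' "$svcs" | awk 'NR > 1 && $4 == "<pending>" { print $1 " has no external address yet" }'
# prints: ziti-router-1 has no external address yet
```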
View logs
Ziti controller logs
```shell
kubectl logs -f deployment/ziti-controller -n ziti
```
Ziti router logs
```shell
kubectl logs -f deployment/ziti-router-1 -n ziti
```
Support stack component logs
```shell
kubectl logs -f deployment/grafana -n support
kubectl logs -f deployment/logstash -n support
kubectl logs -f deployment/rabbitmq -n support
```
View previous container logs (after a crash)
If a container has restarted, view the logs from the previous instance:
```shell
kubectl logs <pod-name> -n ziti --previous
```
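`--previous` is only useful for containers that have actually restarted, and the RESTARTS column of `kubectl get pods` shows which ones have. A sketch of that check against sample output (pod names and counts are illustrative):

```shell
# List pods with a nonzero restart count as candidates for --previous logs.
pods='NAME                    READY   STATUS    RESTARTS   AGE
ziti-controller-6f9c8       1/1     Running   0          3d
logstash-5b8d7              1/1     Running   4          3d'

printf '%s\n' "$pods" | awk 'NR > 1 && $4 > 0 { print $1 " has restarted " $4 " time(s)" }'
# prints: logstash-5b8d7 has restarted 4 time(s)
```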
Restart a component
If a component is in a bad state after ruling out configuration issues, restart its deployment:
```shell
kubectl rollout restart deployment ziti-controller -n ziti
kubectl rollout restart deployment ziti-router-1 -n ziti
```
For support stack components:
```shell
kubectl rollout restart deployment grafana -n support
kubectl rollout restart deployment logstash -n support
```
Verify Ziti controller connectivity
Test that the controller API is reachable at its advertise address:
```shell
curl --insecure https://<controller-advertise-address>:<port>/version
```
If this fails, check DNS resolution and firewall rules:
```shell
nslookup <controller-advertise-address>
```
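If the endpoint does respond, the JSON body includes the controller version, which is worth recording for support cases. This sketch pulls it out with sed from a sample response; the payload shape here is an assumption modeled on typical Ziti controller `/version` output, so check the actual body you receive:

```shell
# Sample body standing in for the response from:
#   curl --insecure https://<controller-advertise-address>:<port>/version
resp='{"data":{"apiVersions":{"edge":{"v1":{"path":"/edge/client/v1"}}},"version":"v1.1.15"}}'

printf '%s\n' "$resp" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p'
# prints: v1.1.15
```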
Log in to the Ziti controller CLI
Use nf-login to authenticate with the controller and run Ziti CLI commands:
```shell
nf-login
```
Once logged in, you can inspect network state directly:
```shell
ziti edge list identities
ziti edge list services
ziti edge list edge-routers
```
Access the controller pod directly
For advanced debugging, you can exec into the controller pod:
```shell
kubectl exec -it deployment/ziti-controller -n ziti -- /bin/bash
```
Collect diagnostics for NetFoundry support
Run the support bundle command to collect logs and stack dumps into a zip file:
```shell
nf-support-bundle
```
This collects the last 24 hours of logs from all pods in the ziti and support namespaces, plus ziti agent stack
output from each Ziti pod.
For additional diagnostics, pass optional flags:
```shell
nf-support-bundle --mem   # Include memory statistics
nf-support-bundle --cpu   # Include CPU profiling
nf-support-bundle --heap  # Include heap profiling
```
Include the generated support_bundle_*.zip file along with install.log and any kubectl_events_*.log files when
contacting NetFoundry support.