# p0 kubeconfig

**Prerequisites**

| Component          | Check Command                | Requirement           |
| ------------------ | ---------------------------- | --------------------- |
| **P0 CLI**         | `p0 version`                 | v0.9.0 or later       |
| **Authentication** | `p0 login <ORG_ID>`          | Browser confirmation  |
| **AWS CLI + EKS**  | `aws --version`, `aws eks help` | AWS CLI v2.x with EKS |
| **kubectl**        | `kubectl version --client`   | v1.24 or higher       |
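The presence checks in the table can be scripted; below is a minimal sketch (the `require_bin` helper name is hypothetical, the binary names are the standard ones). Version thresholds from the table still need a manual check.

```shell
#!/bin/sh
# Sketch: verify each prerequisite binary is on $PATH before running
# `p0 kubeconfig`. Only checks presence, not the versions in the table.
require_bin() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

for bin in p0 aws kubectl; do
  require_bin "$bin" || echo "install $bin before running p0 kubeconfig" >&2
done
```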

***

**Common Errors & Resolutions**

| Error Scenario                         | Symptom / Cause                                                                                  | Resolution                                                                                        |
| -------------------------------------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- |
| **Missing Dependencies**               | AWS CLI/EKS or kubectl not on `$PATH`                                                            | Install AWS CLI v2 (or add EKS plugin), install kubectl and move to `/usr/local/bin/`             |
| **Invalid Role Argument**              | `--role` not in `ClusterRole / <name>`, `CuratedRole / <name>`, or `Role / <ns> / <name>` format | Use exact slash syntax (one or two slashes): `--role ClusterRole/cluster-admin`                   |
| **Invalid Resource Argument**          | Missing spaces around slashes                                                                    | Include spaces: `--resource Pod / staging / my-pod-123` or `--resource Deployment / frontend-app` |
| **Cluster Integration Lookup Fails**   | “Failed to fetch cluster integration for `<cluster-id>`”                                         | Verify onboarding in P0 Dashboard → Kubernetes, check network/org ID, rerun with `P0_LOG=debug`   |
| **AWS Credential Conflicts**           | Env vars override P0 profile                                                                     | `unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY`                                                   |
| **`aws eks update-kubeconfig` Fails**  | Missing plugin, wrong region/cluster in ARN, or expired creds                                    | Test manually with `aws eks update-kubeconfig …` and `aws eks list-clusters …`; ensure the profile exists |
| **`kubectl config use-context` Fails** | Context not found or kubectl misconfigured                                                       | Run `kubectl config get-contexts`; check `~/.kube/config` and file permissions                    |
| **ARN Parsing Errors**                 | Stored ARN not in `arn:aws:eks:<region>:<acctId>:cluster/<name>` form                            | Correct the ARN in P0 Dashboard under Kubernetes integration                                      |
| **Pending Approval/Timeouts**          | Stuck on “Requesting access…” or “Waiting for AWS resources…”                                    | Approve in Slack/UI, wait 30–60s for IAM propagation, then retry                                  |
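For the ARN-parsing row above, a stored ARN can be sanity-checked before editing the dashboard. This is a sketch: `is_valid_eks_arn` is a hypothetical helper, and the regex is an assumption derived from the documented `arn:aws:eks:<region>:<acctId>:cluster/<name>` form (AWS may accept cluster-name characters beyond these classes).

```shell
# Sketch: check that a cluster ARN matches the expected EKS shape.
is_valid_eks_arn() {
  printf '%s\n' "$1" |
    grep -Eq '^arn:aws:eks:[a-z0-9-]+:[0-9]{12}:cluster/[A-Za-z0-9][A-Za-z0-9_-]*$'
}

is_valid_eks_arn "arn:aws:eks:us-west-2:123456789012:cluster/prod" \
  && echo "ARN looks well-formed"
```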

***

**Diagnostics & Tips**

* Recheck versions: `p0 version`, `aws --version`, `kubectl version --client`.
* Inspect AWS profiles: `aws configure list-profiles` & `cat ~/.aws/credentials`.
* Manual tests:
  * `aws eks list-clusters --region <region> --profile <profile>`
  * `kubectl config view`
  * `grep -A2 "<clusterARN>" ~/.kube/config`
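The credential-conflict case from the table can be spotted without eyeballing `env` output; a small sketch (`check_aws_env_conflicts` is a hypothetical helper, and the variable list is an assumption covering the common AWS overrides):

```shell
# Sketch: report AWS environment variables that take precedence over the
# profile credentials p0 manages, per the "AWS Credential Conflicts" row.
check_aws_env_conflicts() {
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
    if [ -n "$(printenv "$v" 2>/dev/null)" ]; then
      echo "conflict: $v is set"
    fi
  done
}

check_aws_env_conflicts
```

If anything is reported, `unset` those variables and rerun `p0 kubeconfig`.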

**Escalation Paths**

* Misconfigured integrations → Your P0 admin
* Persistent API/auth failures → P0 Support
* CLI bugs → File a GitHub issue

**Resources**

* Email: <support@p0.dev>
* Slack: your org’s **#p0-help** channel
