p0 kubeconfig
Overview
Request just-in-time access to an AWS EKS cluster and automatically configure your local kubectl context.
Basic Usage
p0 kubeconfig \
--cluster <CLUSTER_ID> \
--role <ROLE_KIND>/<ROLE_NAME> \
[--resource <Kind> / <Namespace> / <Name>] \
[--reason "<REASON>"] \
[--requested-duration "<DURATION>"]
Prerequisites
A logged-in P0 user
A Kubernetes cluster deployed in GCP or AWS
A Kubernetes provider configured in your P0 integrations
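If you have not logged in yet, authenticate the CLI first. The line below is a minimal sketch: it assumes your CLI version provides a p0 login command, and my-org is a placeholder for your own P0 organization ID.
# Authenticate the p0 CLI against your P0 organization (placeholder org ID)
p0 login my-org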
Options & Flags
--cluster <string> (required)
  The cluster ID as registered in P0 (not the ARN).
--role <string> (required)
  The Kubernetes RBAC role to request. Must be one of:
  • ClusterRole/<roleName>
  • CuratedRole/<roleName>
  • Role/<namespace>/<roleName>
--resource <string> (optional)
  Scope access to a specific resource or resource type. Must use spaces around the slashes:
  • <Kind> / <Namespace> / <Name>
  • <Kind> / <Name>
--reason "<string>" (optional)
  A free-form explanation for audit purposes (e.g. "Debugging DNS issues").
--requested-duration "<string>" (optional)
  How long you need access. Supported formats:
  • 10 minutes
  • 2 hours
  • 5 days
  • 1 week
--help (optional)
  Show built-in help text for p0 kubeconfig.
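Because --resource and --reason values contain spaces, quote them when invoking the CLI from a shell. The sketch below simply combines the flags documented above; the cluster, role, and resource names are placeholders.
# Placeholder values; quote any flag value that contains spaces
p0 kubeconfig \
--cluster my-cluster \
--role Role/staging/developer \
--resource "Pod / staging / *" \
--reason "Debugging DNS issues" \
--requested-duration "2 hours"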
Examples
1. Cluster-wide admin for 2 hours
$ p0 kubeconfig \
--cluster my-cluster \
--role ClusterRole/cluster-admin \
--requested-duration "2 hours"
Sample output:
Fetching cluster integration…
Requesting access ClusterRole/cluster-admin on cluster my-cluster…
Waiting for AWS resources to be provisioned and updating kubeconfig for EKS…
Added new context arn:aws:eks:us-west-2:123456789012:cluster/my-cluster to ~/.kube/config
Switched to context arn:aws:eks:us-west-2:123456789012:cluster/my-cluster
Access granted and kubectl configured successfully. Re-run this command to refresh access if credentials expire.
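Once the context has switched, you can sanity-check the grant with standard kubectl commands (a quick verification sketch, assuming the cluster-admin request above was approved):
# Confirm kubectl now points at the EKS cluster
kubectl config current-context
# cluster-admin should be allowed to do anything; expect "yes"
kubectl auth can-i '*' '*' --all-namespaces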
2. Read pods in the staging namespace
$ p0 kubeconfig \
--cluster staging-cluster \
--role Role/staging/developer \
--resource "Pod / staging / *"
Sample output:
Fetching cluster integration…
Requesting access Role/staging/developer on namespace staging (Pods)…
Waiting for AWS resources to be provisioned and updating kubeconfig for EKS…
Added new context arn:aws:eks:us-east-1:987654321098:cluster/staging-cluster to ~/.kube/config
Switched to context arn:aws:eks:us-east-1:987654321098:cluster/staging-cluster
Access granted and kubectl configured successfully.
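With the namespaced role in place, kubectl requests against the staging namespace should now succeed (an illustrative check, assuming the developer role grants read access to Pods):
# List pods in the namespace the role was granted for
kubectl get pods -n staging
# Confirm what the role allows before running other commands
kubectl auth can-i list pods -n staging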
3. Scoped view of a specific Deployment with reason
$ p0 kubeconfig \
--cluster production \
--role CuratedRole/view-deployments \
--resource "Deployment / prod / frontend-api" \
--reason "Verify rollout status"
Sample output:
Fetching cluster integration…
Requesting access CuratedRole/view-deployments on Deployment/frontend-api in prod…
Waiting for AWS resources to be provisioned and updating kubeconfig for EKS…
Added new context arn:aws:eks:us-west-2:123456789012:cluster/production to ~/.kube/config
Switched to context arn:aws:eks:us-west-2:123456789012:cluster/production
Access granted and kubectl configured successfully.
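You can then verify the rollout named in the request (illustrative kubectl usage, assuming the curated role permits viewing Deployments in the prod namespace):
# Check the rollout status of the Deployment named in --resource
kubectl rollout status deployment/frontend-api -n prod
# Inspect the Deployment itself
kubectl get deployment frontend-api -n prod -o wide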
Refreshing Access
When credentials expire, simply re-run the same command (all flags are remembered):
p0 kubeconfig --cluster my-cluster --role ClusterRole/cluster-admin
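A common symptom of expired credentials is kubectl failing with an Unauthorized error. The exact message varies by cluster, but it typically looks like the sketch below; re-running the command above restores access.
$ kubectl get pods
error: You must be logged in to the server (Unauthorized)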