Closed
Labels
- kind/bug: Categorizes issue or PR as related to a bug.
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
- needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
- sig/autoscaling: Categorizes an issue or PR as relevant to SIG Autoscaling.
Description
What happened:
I have a deployment with the following upgrade strategy:
strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
  type: RollingUpdate
and this HPA:
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: xxxx
  targetCPUUtilizationPercentage: 80
When I sent a patch command to update the container image (current replicas was 1), the following happened:
- a new pod was created and became ready
- after the new pod was ready, the old pod started terminating
- the HPA then set the desired replica count to 2
- after a while, the HPA readjusted the desired replica count back to 1
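A plausible explanation for the transient scale-up, assuming the HPA's documented utilization rule (desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)): during the surge the deployment briefly runs an extra pod, and a short utilization spike around the rollout is enough to push the ceiling up. The utilization numbers below are illustrative, not measured:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float) -> int:
    """Simplified Kubernetes HPA scaling rule:
    desired = ceil(current * currentUtilization / targetUtilization)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# A brief spike above the 80% target while at 1 replica scales up:
print(desired_replicas(1, 85, 80))   # -> 2

# Once the rollout settles and per-pod utilization drops, it scales back:
print(desired_replicas(2, 30, 80))   # -> 1
```

This would reproduce exactly the 1 -> 2 -> 1 flap described above, even though the deployment itself never needed more capacity.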
What you expected to happen:
- a new pod is created and becomes ready
- after the new pod is ready, the old pod terminates
How to reproduce it (as minimally and precisely as possible):
- create a deployment with replicas set to 1, maxSurge: 1, maxUnavailable: 0
- create an HPA targeting the deployment with minReplicas 1, maxReplicas 3, and an 80% CPU target
- change the container image via kubectl edit or another approach
- keep watching the pod count as it changes
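The steps above can be sketched as minimal manifests (names, labels, and image tags are placeholders, not from the original report):

```yaml
# deployment.yaml: one replica, surge-only rolling updates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxxx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: xxxx
    spec:
      containers:
      - name: app
        image: nginx:1.14          # placeholder image
        resources:
          requests:
            cpu: 100m              # a CPU request is needed for the HPA to compute utilization
---
# hpa.yaml: scale 1..3 on 80% CPU utilization
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: xxxx
spec:
  minReplicas: 1
  maxReplicas: 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: xxxx
  targetCPUUtilizationPercentage: 80
```

After applying both, trigger the rollout with something like `kubectl set image deployment/xxxx app=nginx:1.15` and watch the pod count with `kubectl get pods -w`.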
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others: