| author | Devan Goodwin <dgoodwin@redhat.com> | 2016-11-21 11:29:07 -0400 |
|---|---|---|
| committer | Devan Goodwin <dgoodwin@redhat.com> | 2016-11-21 11:29:07 -0400 |
| commit | 27c0a29f266def67e22667fef6823062b8167be5 (patch) | |
| tree | 06ba5c73e85812324ea4f34ebfe4da3e0030cf59 /playbooks/common/openshift-master/restart.yml | |
| parent | 6782fa3c9e01b02e6a29e676f6bbe53d040b9708 (diff) | |
Fix rare failure to deploy new registry/router after upgrade.
The router/registry update and re-deploy were recently reordered to
immediately follow the control plane upgrade, right before we proceed to
node upgrade.
In some situations (small or single-host clusters) it appears possible
that the deployer pods are still running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment is
marked failed and the router/registry continue running the old version,
despite the deployment config having been updated correctly.
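
One way to confirm this state is with the oc client. This is only a
sketch, assuming the default docker-registry deployment config in the
'default' namespace; names may differ in your cluster:

    # Show the status of the most recent deployment (expected: failed):
    oc deploy docker-registry -n default
    # The old registry pod, meanwhile, is still running:
    oc get pods -n default | grep docker-registry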
This change re-orders the router/registry upgrade to follow node
upgrade. However, for a separate control plane upgrade, the
router/registry re-deploy still occurs at the end. This is because the
router/registry seem like they should logically be included in a control
plane upgrade, and presumably the user will not manually launch a node
upgrade so quickly as to trigger an evacuation on the node in question.
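
For illustration, the combined upgrade flow is now ordered roughly as
follows. This is a minimal sketch; the include file names below are
hypothetical, not the actual playbook paths in this repo:

    # Combined cluster upgrade, after this change (hypothetical names).
    - include: upgrade_control_plane.yml

    # Nodes are drained/evacuated and upgraded before the router/registry
    # re-deploy, so a deployer pod can no longer be killed mid-deployment.
    - include: upgrade_nodes.yml

    # Router/registry re-deploy now runs last in the combined upgrade.
    - include: redeploy_router_registry.yml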
The workaround for this problem, when it does occur, is simply to run:

    oc deploy docker-registry --latest
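
The same presumably applies to a failed router deployment, assuming the
default 'router' deployment config name:

    oc deploy router --latest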
Diffstat (limited to 'playbooks/common/openshift-master/restart.yml')
0 files changed, 0 insertions, 0 deletions