1.3 / 3.3 Upgrades

Refactored the 3.2 upgrade common files out to a path that does not
indicate they are strictly for 3.2.
The 3.3 upgrade then becomes a relatively small copy of the byo entry
point, all calling the same code as the 3.2 upgrade.
Thus far there are no known 3.3-specific upgrade tasks. In the future we
will likely want to allow hooks out to version-specific pre/upgrade/post
tasks.
Also fixes a bug where the handlers were not restarting node/openvswitch
containers during upgrades, due to a change in Ansible 2+.
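
The handler fix is easiest to picture as a notify/handler pair. A minimal
sketch, assuming the usual openshift-ansible service naming via
openshift.common.service_type; the real file, handler, and service names may
differ:

```yaml
# Hypothetical sketch only: a node config change notifies handlers that
# restart the containerized node and openvswitch services. Paths and names
# are assumed, not copied from the repo.
- hosts: nodes
  tasks:
    - name: Lay down the updated node configuration
      template:
        src: node-config.yaml.j2
        dest: /etc/origin/node/node-config.yaml
      notify:
        - restart openvswitch
        - restart node
  handlers:
    - name: restart openvswitch
      service:
        name: openvswitch
        state: restarted
    - name: restart node
      service:
        name: "{{ openshift.common.service_type }}-node"
        state: restarted
```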

We now handle the two pieces of the upgrade that require a node evacuation
(Docker and the node itself) in the same play.
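
Roughly, the combined play looks like the sketch below; the evacuation
commands, group names, and package names are assumptions for illustration,
not the playbook's exact contents.

```yaml
# Sketch: evacuate once, upgrade both Docker and the node, then put the node
# back into rotation. Group and package names are assumed.
- hosts: nodes
  serial: 1
  tasks:
    - name: Mark the node unschedulable
      command: oadm manage-node {{ openshift.common.hostname }} --schedulable=false
      delegate_to: "{{ groups.masters.0 }}"

    - name: Evacuate pods from the node
      command: oadm manage-node {{ openshift.common.hostname }} --evacuate --force
      delegate_to: "{{ groups.masters.0 }}"

    - name: Upgrade Docker
      package:
        name: docker
        state: latest

    - name: Upgrade the node package
      package:
        name: "{{ openshift.common.service_type }}-node"
        state: latest

    - name: Restart the node service
      service:
        name: "{{ openshift.common.service_type }}-node"
        state: restarted

    - name: Mark the node schedulable again
      command: oadm manage-node {{ openshift.common.hostname }} --schedulable=true
      delegate_to: "{{ groups.masters.0 }}"
```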

Now more of a generic upgrade playbook that goes to the latest Docker
version.
Added support for the docker_version inventory variable; when it is set, we
disable the check for >= 1.10 and instead make sure you are running at least
the specified version. (We will not downgrade you to the requested version,
however; that is much too complicated.)
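
The version handling can be sketched with Ansible's version_compare filter;
curr_docker_version here is an assumed fact holding the installed version,
not necessarily what the playbook actually uses.

```yaml
# Sketch: by default upgrade Docker to the latest available version; when
# docker_version is set, only make sure at least that version is installed.
# We never downgrade. curr_docker_version is an assumed fact.
- name: Upgrade Docker to the latest version
  package:
    name: docker
    state: latest
  when: docker_version is not defined

- name: Upgrade Docker to at least the requested version
  package:
    name: "docker-{{ docker_version }}"
    state: present
  when:
    - docker_version is defined
    - curr_docker_version | version_compare(docker_version, '<')
```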
|
|\| |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |

The tasks were attempting to stop/start etcd, which would be fine on the
stop, but on start could actually kick the non-containerized etcd service,
which happens to be laid down even though it is unused.
When that service was started again it would claim the port embedded etcd
needs, and the master would then fail to come up.
Instead, use the correct etcd_container service.
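
In task form the fix amounts to targeting the containerized unit, along
these lines (the fact name is an assumption):

```yaml
# Sketch: stop/start the etcd_container unit on containerized hosts so the
# unused non-containerized etcd service is never started and cannot claim
# the port that embedded etcd needs.
- name: Stop etcd
  service:
    name: "{{ 'etcd_container' if openshift.common.is_containerized | bool else 'etcd' }}"
    state: stopped

# ... backup / upgrade steps run here ...

- name: Start etcd
  service:
    name: "{{ 'etcd_container' if openshift.common.is_containerized | bool else 'etcd' }}"
    state: started
```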

Adds a separate playbook for the Docker 1.10 upgrade that can be run
standalone on a pre-existing 3.2 cluster. The upgrade will take each node
out of rotation and remove *all* containers and images on it, as this is
reportedly faster and more storage efficient than performing the in-place
1.10 upgrade.
This process is integrated into the 3.1 to 3.2 upgrade process.
Normal config playbooks now become 3.2 only and require Docker 1.10. Users
of older environments will have to use an appropriate openshift-ansible
version.
Config playbooks are no longer in the business of upgrading or downgrading
Docker.
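
The per-node flow described above might look roughly like this; it assumes
the node has already been evacuated, and the service, package, and fact
names are illustrative only.

```yaml
# Sketch of the wipe-and-upgrade approach: remove every container and image,
# then install Docker 1.10 and bring the services back up.
- name: Stop the node service
  service:
    name: "{{ openshift.common.service_type }}-node"
    state: stopped

- name: Remove all containers
  shell: docker ps -aq | xargs --no-run-if-empty docker rm -f

- name: Remove all images
  shell: docker images -q | xargs --no-run-if-empty docker rmi -f

- name: Stop docker
  service:
    name: docker
    state: stopped

- name: Upgrade Docker to 1.10 (package glob shown only for illustration)
  package:
    name: "docker-1.10*"
    state: present

- name: Start docker and the node again
  service:
    name: "{{ item }}"
    state: started
  with_items:
    - docker
    - "{{ openshift.common.service_type }}-node"
```
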
Fix cli_docker_additional_registries being erased during upgrade.

Legacy options (cli_*) were not being migrated during upgrade. Add the
oo_all_hosts group, and migrate the facts as we do in the normal cluster
playbooks.
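
A minimal sketch of what that migration might look like, following the group
and variable naming described above; the OSEv3 inventory group and the exact
task layout are assumptions.

```yaml
# Sketch: build oo_all_hosts, then carry a legacy cli_* option forward, e.g.
# cli_docker_additional_registries -> openshift_docker_additional_registries.
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Build the oo_all_hosts group
      add_host:
        name: "{{ item }}"
        groups: oo_all_hosts
      with_items: "{{ groups['OSEv3'] | default(groups['all']) }}"

- hosts: oo_all_hosts
  tasks:
    - name: Migrate legacy cli_* facts
      set_fact:
        openshift_docker_additional_registries: "{{ cli_docker_additional_registries }}"
      when:
        - cli_docker_additional_registries is defined
        - openshift_docker_additional_registries is not defined
```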

- Reconfigures masters to use port 8053 for SkyDNS
- Runs the openshift_node_dnsmasq role on all nodes
- Reconfigures nodes to use dnsmasq (a rough sketch of the flow follows)
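
The sketch below stands in for that flow; the lineinfile edit is only
illustrative of moving SkyDNS to port 8053, not the real master config task.

```yaml
# Sketch only: bind SkyDNS to port 8053 on the masters, then run the dnsmasq
# role on the nodes.
- hosts: masters
  tasks:
    - name: Bind SkyDNS to port 8053
      lineinfile:
        dest: /etc/origin/master/master-config.yaml
        regexp: '^(\s*)bindAddress:.*:53$'
        line: '  bindAddress: 127.0.0.1:8053'
      notify: restart master
  handlers:
    - name: restart master
      service:
        name: "{{ openshift.common.service_type }}-master"
        state: restarted

- hosts: nodes
  roles:
    - openshift_node_dnsmasq
```
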
openshift_facts is currently failing because it doesn't properly set up groups
after the proxy changes we made. This fixes that.

- refactors the docker role to push generic config into the docker role and
  wrap OpenShift-specific variables into an openshift_docker role and its
  dependent openshift_docker_facts role
- adds support for setting the --confirm-def-push flag (Resolves
  https://github.com/openshift/openshift-ansible/issues/1014)
- moves docker-related facts from the common/node roles to a new docker role
- renames cli_docker_* role variables to openshift_docker_*, maintaining
  backward compatibility (see the sketch below)
- updates role dependencies to pull in openshift_docker conditionally based
  on is_containerized
- removes playbooks/common/openshift-docker since the docker role is now
  conditionally included
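
The backward-compatible rename can be pictured as a fact task that falls back
to the legacy name; shown here for a single variable, as a sketch rather than
the role's actual contents.

```yaml
# Sketch: openshift_docker_* values fall back to the legacy cli_* names so
# old inventories keep working.
- name: Set docker facts with legacy fallback
  set_fact:
    openshift_docker_additional_registries: >-
      {{ openshift_docker_additional_registries
         | default(cli_docker_additional_registries)
         | default([]) }}
```

The conditional pull-in can then live in a dependent role's meta/main.yml as
a dependency guarded by a when condition on the is_containerized fact.
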
Add missing atomic- and openshift-enterprise

Some checks related to *enterprise deployments were still only looking for
the "enterprise" deployment_type. Update them to also cover the
atomic-enterprise and openshift-enterprise deployment types.
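
The pattern of the fix, as a hedged example; the task itself is only a
placeholder.

```yaml
# Sketch: match every enterprise variant instead of only "enterprise".
- name: Run an enterprise-only step (placeholder)
  debug:
    msg: "enterprise deployment detected"
  when: deployment_type in ['enterprise', 'atomic-enterprise', 'openshift-enterprise']
```
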
Refactor storage options

Replace the hard fail with a warning and a prompt when running Ansible from
a host that will be rebooted.
Re-organize the playbooks.

Blocks running Ansible on a host that will be restarted.
Can restart just services, or optionally the full system.
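
A minimal sketch of the warn-and-prompt behaviour; the way the control host
is detected here is an assumption, not the playbook's exact check.

```yaml
# Sketch: instead of failing outright, prompt before restarting a host that
# appears to be the machine Ansible is running from.
- name: Determine the control host's hostname
  local_action: command hostname -f
  register: control_host
  changed_when: false

- name: Prompt before restarting the Ansible control host
  pause:
    prompt: >-
      {{ inventory_hostname }} looks like the host you are running Ansible
      from and is about to be restarted. Press Enter to continue, or Ctrl+C
      then 'a' to abort.
  when: control_host.stdout in [inventory_hostname, ansible_hostname, ansible_fqdn | default('')]
```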