Commit message | Author | Age | Files | Lines
We now handle the two pieces of the upgrade that require a node evacuation in the same play (Docker, and the node itself).
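
A minimal sketch of the idea, assuming illustrative task and group names; oadm manage-node is the real OpenShift 3.x command, but the facts (openshift.node.nodename, openshift.common.service_type) and groups shown here follow openshift-ansible conventions and may not match the actual playbook:

    # Hypothetical play: evacuate each node once, then upgrade Docker and the
    # node components before marking it schedulable again.
    - name: Evacuate each node once, then upgrade Docker and the node itself
      hosts: oo_nodes_to_upgrade        # hypothetical group name
      serial: 1
      tasks:
        - name: Mark the node unschedulable and evacuate its pods
          command: oadm manage-node {{ openshift.node.nodename }} --evacuate --force
          delegate_to: "{{ groups.oo_first_master.0 }}"

        - name: Upgrade Docker
          package:
            name: docker
            state: latest

        - name: Upgrade the node package
          package:
            name: "{{ openshift.common.service_type }}-node"
            state: latest

        - name: Make the node schedulable again
          command: oadm manage-node {{ openshift.node.nodename }} --schedulable=true
          delegate_to: "{{ groups.oo_first_master.0 }}"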
This is now more of a generic upgrade playbook for moving to the latest Docker
version.
Added support for a docker_version inventory variable; when it is set we
disable the check for >= 1.10 and instead make sure you are running at least
the specified version. (We will not downgrade you to the requested version,
however; that is far too complicated.)
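
For example, docker_version=1.10.3 could be set in the [OSEv3:vars] section of the inventory. A hedged sketch of the resulting check, where curr_docker_version stands in for whatever fact the playbook uses to record the installed Docker version:

    # Only enforce the user-supplied minimum when docker_version is set;
    # otherwise the playbook keeps its normal >= 1.10 check. No downgrade is
    # attempted if a newer Docker is already installed.
    - name: Fail if the installed Docker is older than the requested version
      fail:
        msg: "Docker {{ docker_version }} or newer is required (found {{ curr_docker_version }})"
      when:
        - docker_version is defined
        - not curr_docker_version | version_compare(docker_version, '>=')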
The tasks were attempting to stop/start etcd, which was fine on the stop, but
on start they could actually kick the non-containerized etcd service, which
happens to be laid down even though it is unused.
When that service was started again it would claim the port embedded etcd
needs, and the master would then fail to come up.
Instead, use the correct etcd_container service.
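
A minimal sketch of the distinction, assuming the openshift.common.is_containerized fact used elsewhere in openshift-ansible:

    # Restart the containerized unit, not the unused host etcd service that
    # would otherwise grab the port embedded etcd needs.
    - name: Restart etcd (containerized)
      service:
        name: etcd_container
        state: restarted
      when: openshift.common.is_containerized | bool

    - name: Restart etcd (rpm)
      service:
        name: etcd
        state: restarted
      when: not openshift.common.is_containerized | bool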
Adds a separate playbook for the Docker 1.10 upgrade that can be run
standalone on a pre-existing 3.2 cluster. The upgrade takes each node out of
rotation and removes *all* containers and images on it, as this is reportedly
faster and more storage-efficient than performing the in-place 1.10 upgrade.
This process is integrated into the 3.1 to 3.2 upgrade process.
Normal config playbooks now become 3.2 only and require Docker 1.10.
Users of older environments will have to use an appropriate
openshift-ansible version.
Config playbooks are no longer in the business of upgrading or downgrading
Docker.
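
A rough sketch of the per-node core of that approach; the package name and version are illustrative, and the real playbook also has to handle containerized installs and storage resets:

    # With the node already out of rotation, wipe local containers and images,
    # then move to Docker 1.10 and bring the daemon back up.
    - name: Remove all containers
      shell: docker ps -aq | xargs --no-run-if-empty docker rm -f

    - name: Remove all images
      shell: docker images -q | xargs --no-run-if-empty docker rmi -f

    - name: Upgrade Docker
      package:
        name: docker-1.10.3
        state: present

    - name: Restart Docker
      service:
        name: docker
        state: restarted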
Fix cli_docker_additional_registries being erased during upgrade.
Legacy options (cli_*) were not being migrated during upgrade. Add the
oo_all_hosts group, and migrate the facts as we do in the normal cluster
playbooks.
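
A sketch of what that migration might look like, with cli_docker_additional_registries shown as one example mapping; g_all_hosts comes from cluster_hosts.yml, and the exact play layout and fact names are assumptions:

    - name: Evaluate oo_all_hosts
      hosts: localhost
      gather_facts: no
      tasks:
        - name: Place every cluster host into oo_all_hosts
          add_host:
            name: "{{ item }}"
            groups: oo_all_hosts
          with_items: "{{ g_all_hosts | default([]) }}"

    - name: Migrate legacy cli_* options
      hosts: oo_all_hosts
      tasks:
        - name: Map cli_docker_additional_registries onto the new variable
          set_fact:
            openshift_docker_additional_registries: "{{ cli_docker_additional_registries }}"
          when: cli_docker_additional_registries is defined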
- Reconfigures masters to use port 8053 for SkyDNS
- Runs the openshift_node_dnsmasq role on all nodes
- Reconfigures nodes to use dnsmasq (see the outline below)
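
An outline of that wiring, assuming the modify_yaml helper module that openshift-ansible ships and the standard master-config.yaml location; the key and group names are shown for illustration only:

    - name: Run dnsmasq on every node
      hosts: oo_nodes_to_config
      roles:
        - openshift_node_dnsmasq

    - name: Move SkyDNS to port 8053 on the masters
      hosts: oo_masters_to_config
      tasks:
        - name: Point dnsConfig at port 8053 in master-config.yaml
          modify_yaml:
            dest: /etc/origin/master/master-config.yaml
            yaml_key: dnsConfig.bindAddress
            yaml_value: "0.0.0.0:8053"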
openshift_facts is currently failing because it doesn't properly set up groups
after the proxy changes we made. This fixes that.
- refactors the docker role to push generic config into the docker role and
  wrap OpenShift-specific variables into an openshift_docker role and its
  dependent openshift_docker_facts role
- adds support for setting the --confirm-def-push flag (Resolves
  https://github.com/openshift/openshift-ansible/issues/1014)
- moves Docker-related facts from the common/node roles to a new docker role
- renames cli_docker_* role variables to openshift_docker_* (maintaining
  backward compatibility)
- updates role dependencies to pull in openshift_docker conditionally based on
  is_containerized (see the sketch below)
- removes playbooks/common/openshift-docker since the docker role is now
  conditionally included
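
A minimal sketch of the conditional dependency described above, as it might appear in a role's meta/main.yml; the role and fact names are taken from this log, but treat the exact wiring as an assumption:

    # Pull in openshift_docker (and, through it, openshift_docker_facts) only
    # when the install is containerized.
    dependencies:
      - role: openshift_docker
        when: openshift.common.is_containerized | bool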
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Add missing atomic- and openshift-enterprise
Some checks related to *enterprise deployments were still only looking for the
"enterprise" deployment_type. Update them to also cover the atomic-enterprise
and openshift-enterprise deployment types.
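
For example, a check of this kind, written against the deployment types named above:

    # Treat every enterprise variant the same instead of matching only the
    # legacy "enterprise" deployment_type.
    - name: Enterprise-only configuration guard
      fail:
        msg: "This option is only supported on enterprise deployments"
      when: deployment_type not in ['enterprise', 'atomic-enterprise', 'openshift-enterprise']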
Refactor storage options
Replace the fail with a warning and prompt when running ansible from a host that will be rebooted.
Re-organize playbooks.
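
A sketch of that safeguard, assuming the control host is detected by comparing its hostname against the inventory; the real detection and wording may differ:

    - name: Warn instead of failing when ansible runs on a host to be rebooted
      pause:
        prompt: >-
          Ansible appears to be running on {{ inventory_hostname }}, which will
          be rebooted by this playbook. Press Enter to continue anyway, or
          Ctrl+C then 'a' to abort
      when: lookup('pipe', 'hostname -f') == inventory_hostname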
Blocks running ansible on a host that will be restarted.
Restarts can cover just the services or, optionally, the full system.
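
A hedged sketch of choosing between the two restart modes; the openshift_rolling_restart_mode variable name and the service name are assumptions here:

    - name: Restart master services only
      service:
        name: "{{ openshift.common.service_type }}-master"
        state: restarted
      when: openshift_rolling_restart_mode | default('services') == 'services'

    - name: Reboot the whole system
      command: shutdown -r now "Ansible requested a restart"
      async: 1
      poll: 0
      when: openshift_rolling_restart_mode | default('services') == 'system'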
- Move debug_level into vars.yml and the byo inventory
- change variables in cluster_hosts.yml to be g_* and update playbooks to use
  those values directly instead of setting them indirectly
- added a new g_all_hosts entry in cluster_hosts to use in the update playbook
  instead of unioning all host types within the playbook (see the sketch below)
- added a cluster_hosts.yml for the byo playbook
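
A sketch of what a byo cluster_hosts.yml along these lines could contain; the group names follow the byo inventory conventions, and the exact contents are an assumption:

    # Map byo inventory groups onto the g_* variables, and build g_all_hosts
    # here once instead of unioning the host types inside each playbook.
    g_etcd_hosts: "{{ groups.etcd | default([]) }}"
    g_lb_hosts: "{{ groups.lb | default([]) }}"
    g_master_hosts: "{{ groups.masters | default([]) }}"
    g_node_hosts: "{{ groups.nodes | default([]) }}"
    g_all_hosts: "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_lb_hosts) }}"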
- modify host evaluation to set oo_nodes_to_config from a new variable,
  g_new_nodes_group, if it is defined, rather than g_nodes_group, and also skip
  adding the master when g_new_nodes_group is set (see the sketch below)
- Remove byo-specific naming from playbooks/common/openshift-cluster/scaleup.yml
  and create a new playbooks/byo/openshift-cluster/scaleup.yml playbook.
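
A sketch of the host-evaluation change, with the g_* group variable names as described above (g_masters_group is assumed) and the master-skipping condition simplified:

    # Prefer g_new_nodes_group when it is defined so a scaleup run only touches
    # the newly added nodes; otherwise fall back to the full g_nodes_group.
    - name: Evaluate oo_nodes_to_config
      add_host:
        name: "{{ item }}"
        groups: oo_nodes_to_config
      with_items: "{{ groups[g_new_nodes_group | default(g_nodes_group)] | default([]) }}"

    - name: Add the first master to oo_nodes_to_config
      add_host:
        name: "{{ groups[g_masters_group].0 }}"
        groups: oo_nodes_to_config
      when: g_new_nodes_group is not defined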
- Split the playbooks into two, one for 3.0 minor upgrades and one for 3.0 to
  3.1 upgrades
- Move upgrade playbooks to common/openshift-cluster/upgrades from adhoc
- Added byo wrapper playbooks to set the groups based on the byo conventions;
  other providers will need similar playbooks added eventually (see the sketch
  below)
- installer wrapper updates for the refactored upgrade playbooks
- call the new 3.0 to 3.1 upgrade playbook
- various fixes for edge cases I hit with a really old config lying around
- fix output of host facts to show the connect_to value.
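
A sketch of what one of the byo wrapper playbooks might look like; the include path and group names are illustrative:

    # Translate the byo inventory conventions into the g_* variables the common
    # upgrade playbook expects, then hand off to it.
    - include: ../../common/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
      vars:
        g_etcd_hosts: "{{ groups.etcd | default([]) }}"
        g_master_hosts: "{{ groups.masters | default([]) }}"
        g_node_hosts: "{{ groups.nodes | default([]) }}"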
Set loglevel=2 as our default across the board