Remove duplicate `when` key

Fix rare failure to deploy new registry/router after upgrade.

Set nameservers on DHCPv6 event

Fix SELinux issues with etcd container

Refactored to use Ansible systemd module

Gracefully handle OpenSSL module absence

etcd upgrade playbook is not currently applicable to embedded etcd in…

Make it so that we don't relabel /etc/etcd/ (via `:z`) on every run.
Doing this causes systemd to fail to access /etc/etcd/etcd.conf when
trying to run the systemd unit file on the next run. Convert it from
`:z` to `:ro` since we only need read-only access to the files.
Fixes #2811
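
For illustration, a minimal sketch of the mount option change (the
`docker_container` task shape and variable name are assumptions, not the
repo's actual unit file):

```yaml
# Sketch: mount /etc/etcd read-only instead of relabeling it with :z.
- name: Run the etcd container
  docker_container:
    name: etcd
    image: "{{ etcd_image }}"     # placeholder variable
    volumes:
      - /etc/etcd:/etc/etcd:ro    # was /etc/etcd:/etc/etcd:z
```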

Fixes Bug 1395945

Fix invalid embedded etcd fact in etcd upgrade playbook.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1398549
Was getting a different failure here complaining that openshift was not
in the facts, as we had not loaded facts for the first master during the
playbook run. However, this check was recently used in
upgrade_control_plane and should be more reliable.

Should fix #2869

lhuard1A/fix_list_after_create_on_libvirt_and_openstack
Fix the list done after cluster creation on libvirt and OpenStack

* Ansible systemd module used in place of service module (see the sketch below)
* Refactored command tasks which are no longer necessary
* Applying rules from openshift-ansible Best Practices Guide
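
For illustration, the shape of that substitution (the unit name is a
placeholder):

```yaml
# Before: service module plus a separate daemon-reload command task.
- command: systemctl daemon-reload
- service:
    name: origin-node      # placeholder unit
    state: restarted

# After: the systemd module folds the reload into the same task.
- systemd:
    name: origin-node
    state: restarted
    daemon_reload: yes
```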

Updating docs for Ansible 2.2 requirements

Verify the presence of the dbus Python binding

Merge admission plugin configs

Since 82449c6, the `list.yml` playbooks use cloud provider specific
variables to find the IPs of the VMs. Those “cloud provider specific”
variables are the ones provided by the dynamic inventories.
But there was a problem when the `list.yml` playbooks were invoked from
the `launch.yml` ones because, in that case, the inventory does not come
from the dynamic inventory scripts, but from the `add_host` calls made
inside `launch_instances.yml`.
Whereas the GCE and AWS `launch_instances.yml` correctly added the
variables used by `list.yml` to their `add_host` calls, libvirt and
OpenStack were missing that.
Fixes #2856
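
A minimal sketch of the kind of `add_host` call that was missing (the
group name, IP fact, and loop variable are illustrative assumptions):

```yaml
# Sketch: register the new VM with the variables list.yml relies on.
- add_host:
    name: "{{ item.name }}"               # placeholder loop item
    groups: oo_hosts_to_list              # placeholder group name
    ansible_ssh_host: "{{ item.ip }}"     # the IP that list.yml reads
  with_items: "{{ created_instances }}"   # placeholder fact
```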

A dhcp6-change event may happen on nodes running dual-stack IPv4/IPv6
and DHCP, even if OpenShift itself doesn't use IPv6. /etc/resolv.conf
needs to be adjusted as well in this case.

Systemd `systemctl show` workaround

`systemctl show` would exit with RC=1 for non-existent services in v231.
This caused the Ansible systemd module to report a failure running the
`systemctl show` command instead of reporting that the service was not
found. This change catches both failure modes, covering both older and
newer versions of systemd.
The change in systemd exit status may be resolved in systemd v232:
https://github.com/systemd/systemd/commit/3dced37b7c2c9a5c733817569d2bbbaa397adaf7
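
A minimal sketch of a check that tolerates both behaviors (the task
shape and `service_name` variable are illustrative, not the repo's
actual tasks):

```yaml
# Sketch: probe the unit first, tolerating the rc=1 that systemd v231
# returns for non-existent units.
- name: Probe the unit
  command: systemctl show {{ service_name }} -p LoadState
  register: unit_state
  changed_when: false
  failed_when: false

# Older systemd reports rc=0 with LoadState=not-found; v231 reports
# rc=1. Keying off LoadState=loaded handles both.
- name: Restart the service only if the unit exists
  systemd:
    name: "{{ service_name }}"
    state: restarted
  when: "'LoadState=loaded' in unit_state.stdout"
```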

While the proper fix is to have it installed by default, this commit
also provides a better error message in case the module is not present
(e.g. when running on Python 3).

Update README.md

Add missing dependencies

Fix issues encountered in mixed environments

Make os_firewall_manage_iptables run on python3

containerized.

Move the values in kube_admission_plugin_config up one level per
the new format from 1.3:
"The kubernetesMasterConfig.admissionConfig.pluginConfig should be moved
and merged into admissionConfig.pluginConfig."
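
For illustration, the relocation that the release note describes (the
plugin name and its configuration are placeholders):

```yaml
# Before (pre-1.3 master-config.yaml):
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      ExamplePlugin:          # placeholder plugin name
        configuration: {}

# After (1.3): moved up one level and merged.
admissionConfig:
  pluginConfig:
    ExamplePlugin:
      configuration: {}
```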

Refactor os_firewall role

Modified the error message being checked for

Added a BYO playbook for configuring NetworkManager on nodes

In order to do a full install of OpenShift using the byo/config.yml
playbook, it is currently required that NetworkManager be installed
and configured on the nodes prior to the installation. This playbook
introduces a very simple default configuration that can be used to
install, configure and enable NetworkManager on the nodes.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
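
A minimal sketch of such a playbook (the host group and task details are
assumptions, not the repo's actual playbook):

```yaml
# Sketch: install, enable, and start NetworkManager on the node hosts.
- hosts: nodes              # placeholder group name
  become: yes
  tasks:
    - name: Install NetworkManager
      package:
        name: NetworkManager
        state: present

    - name: Enable and start NetworkManager
      systemd:
        name: NetworkManager
        enabled: yes
        state: started
```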

Add hawkular admin cluster role to management admin

Make the role work on F25 Cloud

On F24 and earlier, systemctl show always returned 0. On F25, it
returns 1 when a service does not exist, and thus the role fails
on Fedora 25 Cloud edition.

It fails with this traceback (presumably because, under Python 3, the
captured command output is bytes, so splitting it on a str separator
fails):
Traceback (most recent call last):
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 273, in <module>
    main()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 257, in main
    iptables_manager.add_rule(port, protocol)
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 87, in add_rule
    self.verify_chain()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 82, in verify_chain
    self.create_jump()
  File "/tmp/ansible_ib5gpbsp/ansible_module_os_firewall_manage_iptables.py", line 142, in create_jump
    input_rules = [s.split() for s in output.split('\n')]

Refactor to use Ansible package module

Only run tuned-adm if tuned exists.

Fedora Atomic Host does not have tuned installed.
Fixes #2809
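
A minimal sketch of that guard (the task shape and `tuned_profile`
variable are illustrative):

```yaml
# Sketch: only invoke tuned-adm when the binary is present;
# Fedora Atomic Host ships without tuned.
- name: Check for tuned-adm
  command: which tuned-adm
  register: tuned_adm
  changed_when: false
  failed_when: false

- name: Apply the tuned profile
  command: tuned-adm profile {{ tuned_profile }}   # placeholder variable
  when: tuned_adm.rc == 0
```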

Allow Ansible to continue when a node is unreachable or fails.

Router/registry update and re-deploy was recently reordered to
immediately follow control plane upgrade, right before we proceed to
node upgrade.
In some situations (small or single-host clusters) it appears possible
that the deployer pods are still running when the node in question is
evacuated for upgrade. When the deployer pod dies, the deployment is
marked failed and the router/registry continue running the old version,
despite the deployment config being updated correctly.
This change re-orders the router/registry upgrade to follow node
upgrade. However, for a separate control plane upgrade, the
router/registry update still occurs at the end. This is because the
router/registry seem like they should logically be included in a control
plane upgrade, and presumably the user will not manually launch the node
upgrade so quickly as to trigger an evacuation of the node in question.
The workaround for this problem, when it does occur, is simply to:
oc deploy docker-registry --latest

* Remove unneeded tasks duplicated by new module functionality (see the sketch below)
* Ansible systemd module has 'masked' and 'daemon_reload' options
* Ansible firewalld module has 'immediate' option
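
For illustration, the options mentioned above in use (the unit name and
port are placeholders):

```yaml
# 'masked' and 'daemon_reload' replace separate systemctl command tasks.
- name: Mask a unit and reload systemd in one task
  systemd:
    name: "{{ unit_name }}"   # placeholder
    masked: yes
    daemon_reload: yes

# 'immediate' applies a permanent firewalld rule to the running firewall
# without a separate reload task.
- name: Open a port both immediately and permanently
  firewalld:
    port: 8443/tcp            # placeholder port
    permanent: yes
    immediate: yes
    state: enabled
```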

Fix yum/subman version check on Atomic.

node_dnsmasq -- Set dnsmasq as our only nameserver

Escape LOGNAME variable according to GCE rules #2736