Rename openshift_cfme role to openshift_management

Created by command:
/usr/bin/tito tag --debug --accept-auto-changelog --keep-version --debug

add missing restart node handler to flannel

Automatic merge from submit-queue.
Switch to configmap leader election on 3.7 upgrade
This change sets the controllerConfig.election.lockName to openshift-master-controllers on a 3.7 upgrade.
This is the default in a new 3.7 cluster. Important excerpt from the docs inside the origin codebase (slightly modified):
There are two modes for lease operation - a legacy mode that directly connects to etcd, and the preferred mode which coordinates on a configmap or endpoint in the kube-system namespace. Because legacy mode and the new mode do not coordinate on the same key, an upgrade must stop all controllers before changing the configuration and starting controllers with the new config.
Signed-off-by: Monis Khan <mkhan@redhat.com>
/assign @smarterclayton @jupierce
/kind bug

This change sets the controllerConfig.election.lockName to
openshift-master-controllers on a 3.7 upgrade.
This is the default in a new 3.7 cluster. Important excerpt from
the docs inside the origin codebase (slightly modified):
There are two modes for lease operation - a legacy mode that
directly connects to etcd, and the preferred mode which coordinates
on a configmap or endpoint in the kube-system namespace. Because
legacy mode and the new mode do not coordinate on the same key, an
upgrade must stop all controllers before changing the configuration
and starting controllers with the new config.
Signed-off-by: Monis Khan <mkhan@redhat.com>
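For reference, a minimal sketch of the resulting stanza in /etc/origin/master/master-config.yaml, assuming the 3.7 controllerConfig schema described above (the exact field layout may differ between releases):

```yaml
# Hypothetical excerpt of master-config.yaml after the upgrade: the lock now
# lives in a configmap/endpoint keyed by this name instead of directly in etcd.
controllerConfig:
  election:
    lockName: openshift-master-controllers
```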

Automatic merge from submit-queue.
cri-o: use overlay instead of overlay2
overlay2 and overlay are the same driver. Upstream CRI-O is going to
drop any reference to overlay2 and use only overlay.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

overlay2 and overlay are the same driver. Upstream CRI-O is going to
drop any reference to overlay2 and use only overlay.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
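A minimal sketch of what the rename means on a node, assuming a stock /etc/crio/crio.conf and Ansible's ini_file module; this is illustrative only, not the role's actual task:

```yaml
# Ensure CRI-O selects the storage driver by its canonical name "overlay";
# "overlay2" selected the same containers/storage driver under an alias.
- name: Set the CRI-O storage driver to overlay
  ini_file:
    path: /etc/crio/crio.conf
    section: crio
    option: storage_driver
    value: '"overlay"'        # crio.conf is TOML, so the string keeps its quotes
  notify: restart cri-o       # hypothetical handler name
```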

mgugino-upstream-stage/ensure-docker-restarts-with-iptables
Automatic merge from submit-queue.
Ensure docker is restarted when iptables is restarted
Currently, os_firewall role may run after docker role,
and iptables.service may be restarted. When restarted,
this negatively impacts docker's iptables rules.
This commit ensures that if iptables is restarted,
docker is restarted as well (by systemd)
Fixes: https://github.com/openshift/origin/issues/16709

Currently, os_firewall role may run after docker role,
and iptables.service may be restarted. When restarted,
this negatively impacts docker's iptables rules.
This commit ensures that if iptables is restarted,
docker is restarted as well (by systemd)
Fixes: https://github.com/openshift/origin/issues/16709
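One way systemd can express that relationship, shown here as a hedged Ansible sketch rather than the role's actual implementation (the drop-in path and handler names are assumptions): a PartOf= drop-in on docker.service makes a restart of iptables.service propagate to docker.

```yaml
# Install a drop-in so that restarting iptables.service also restarts docker.
- name: Tie docker restarts to iptables restarts
  copy:
    dest: /etc/systemd/system/docker.service.d/iptables-partof.conf   # assumed path
    content: |
      [Unit]
      PartOf=iptables.service
  notify:
    - reload systemd        # hypothetical handlers
    - restart docker
```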

Stop including origin and ose hosts example file

It's a pain keeping these two in sync so just mention the differences as
necessary.

Automatic merge from submit-queue.
node: make node service PartOf=openvswitch.service when openshift-sdn is used
This reverts commit 7f805f9a0c41477365dd88b0ac73f0d221bd654a.
The commit causes the behavior seen in
https://bugzilla.redhat.com/show_bug.cgi?id=1453113 because openshift-node
is no longer restarted when openvswitch is.
@giuseppe @sdodson @knobunc
RE https://github.com/openshift/openshift-ansible/pull/4213 can we get a more detailed explanation of why the various dependencies are not being restarted correctly?

Commit 7f805f9a0c41477365dd88b0ac73f0d221bd654a causes the behavior seen in
https://bugzilla.redhat.com/show_bug.cgi?id=1453113 because openshift-node
is no longer restarted when openvswitch is, due to the change from Requires
to Wants.
It turns out that making the openshift node service PartOf the OVS service
achieves the same result and ensures openshift-node gets restarted whenever
OVS is, which ensures that networking doesn't break underneath the node.
Suggested by Giuseppe Scrivano

Created by command:
/usr/bin/tito tag --debug --accept-auto-changelog --keep-version --debug

fix typo for default in etcd

Bumping version of service catalog image for 3.7

Automatic merge from submit-queue.
Cfme 4.6
# Description
* Implements support for **CFME 4.6** in OCP 3.7
* **Replaces** the Tech Preview CFME 4.5 release included in OCP 3.6
* Does not support graceful migrations from the CFME 4.5 tech preview release
# References
* [Trello - (5) Integrate CFME 4.6 into OCP Installation](https://trello.com/c/Rzfn5Qa8/380-5-integrate-cfme-46-into-ocp-installation)
Ensure the following RFE/Errors do not happen again
- [x] #4555 - Error creating the CFME user
- [x] #4556 - Error in PV template evaluation
- [x] #4822 - Changing `maxImagesBulkImportedPerRepository` parameter
- [x] #4568 - Add NFS directory support
# Features
Ensure the following features are configurable in the role
- [x] POC deployments can easily default to NFS storage
- [ ] Production/Cloud deployments can use automatic storage providers
- [ ] Able to select between podified vs. external PostgreSQL database (podified uses configured storage mechanism)
- [x] Template resource requests can be overridden for POC deployments

* Update README
* Add parameter docs to inventory examples
* Remove unused graphic
* Update defaults

Adding support for an inventory directory/hybrid inventory

Automatic merge from submit-queue.
Build provision split
Make provisioning steps more reusable
Reorganizing and making some of the plays more
reusable.
Depends-on: https://github.com/openshift/openshift-ansible/pull/5565

Reorganizing and making some of the plays more
reusable.

Automatic merge from submit-queue.
Bug 1496271 - Preserve SCC for ES local persistent storage
ES can be modified to use node local persistent storage. This requires changing the SCC and is described in the docs:
https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html
During an upgrade, the SCC defined by the user is ignored. This fix fetches the user-defined SCC as a fact and adds it to the ES DC, where it is used later.
Also includes a cherry-picked fix for Bug 1482661 - Preserve ES dc nodeSelector and supplementalGroups
cc @jcantrill

ES can be modified to use node local persistent storage. This requires
changing the SCC and is described in the docs:
https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html
During an upgrade, the SCC defined by the user is ignored. This fix fetches
the user-defined SCC as a fact and adds it to the ES DC, where it is used later.
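A rough sketch of the "fetch the user-defined SCC as a fact" idea; the SCC name, fact name, and task layout are placeholders for illustration, not the role's actual tasks:

```yaml
# Record which users the admin already granted on the SCC used for ES local
# storage, so the upgrade can re-apply them instead of silently dropping them.
- name: Read users currently granted the SCC
  command: oc get scc hostmount-anyuid -o jsonpath='{.users}'   # SCC name assumed
  register: logging_es_scc_users
  changed_when: false

- name: Preserve the result as a fact for later templating of the ES DC
  set_fact:
    openshift_logging_es_scc_users: "{{ logging_es_scc_users.stdout }}"   # placeholder fact name
```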

(cherry picked from commit 601e35cbf4410972c7fa0a1d3d5c6327b82353ac)

Automatic merge from submit-queue.
nfs, lb, and groups for checks
Checks have been using the byo group names to determine whether they need to be active. Now that everything runs through common initialization, stop assuming byo names and refer to the common group names instead.
As a follow-on [bugfix](https://bugzilla.redhat.com/show_bug.cgi?id=1496760), run docker checks only where docker will be present: nodes, and containerized masters/etcd. We specifically don't want to run against lb or nfs hosts, so a whitelist approach is used rather than excluding those groups.

fixes bug 1496760
https://bugzilla.redhat.com/show_bug.cgi?id=1496760
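Purely as an illustration of the targeting described above (the checks are activated through the openshift_health_checker role rather than a literal play, and the common group names are assumed here), docker checks are limited to hosts that will actually run docker:

```yaml
# Not the real activation logic - just the intended scope: nodes plus
# containerized masters/etcd; lb and nfs hosts are never included.
- name: Run docker checks only where docker will be present
  hosts: oo_nodes_to_config:oo_masters_to_config:oo_etcd_to_config
  roles:
    - openshift_health_checker
```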

Automatic merge from submit-queue.
Add logging es prometheus endpoint
This PR adds a prometheus endpoint to the logging elasticsearch pod.

wozniakjan/logging/elasticsearch/honor_es_cpu_settings
Automatic merge from submit-queue.
logging: honor openshift_logging_es_cpu_limit
PR https://github.com/openshift/openshift-ansible/pull/3509 removed any usage of `openshift_logging_es_cpu_limit`.
Currently, `openshift_logging_elasticsearch_cpu_limit` is either the default '1000m' or derived from `openshift_logging_es_ops_cpu_limit`, but if the user sets `openshift_logging_es_cpu_limit` in the inventory as documented, its value is ignored.
This PR fixes the issue by setting `openshift_logging_elasticsearch_cpu_limit=openshift_logging_es_cpu_limit`;
when the role is included as -ops, this setting is overridden with `openshift_logging_es_ops_cpu_limit`.

PR https://github.com/openshift/openshift-ansible/pull/3509 removed any
usage of `openshift_logging_es_cpu_limit`.
Currently, `openshift_logging_elasticsearch_cpu_limit` is either the default
'1000m' or derived from `openshift_logging_es_ops_cpu_limit`, but if the user
sets `openshift_logging_es_cpu_limit` in the inventory as documented, its
value is ignored.
This PR fixes the issue by setting
openshift_logging_elasticsearch_cpu_limit=openshift_logging_es_cpu_limit;
including the role as -ops overrides this setting with `openshift_logging_es_ops_cpu_limit`.
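A minimal sketch of the fallback this describes, using only the variable names from the commit message (the role's actual defaults file may be laid out differently):

```yaml
# Illustrative defaults entry for the openshift_logging_elasticsearch role.
openshift_logging_elasticsearch_cpu_limit: "{{ openshift_logging_es_cpu_limit | default('1000m') }}"
# When the role is included for the -ops cluster, the caller passes
# openshift_logging_es_ops_cpu_limit instead, overriding the value above.
```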

mgugino-upstream-stage/limit-openshift-version-hosts
Automatic merge from submit-queue.
Limit hosts that run openshift_version role
Currently, the openshift_version role is run against
the oo_all_hosts group. This causes the dependencies,
such as openshift_docker and docker, to be run against
host groups that were not intended, such as nfs.
This commit explicitly limits the openshift_version
role to run only against masters, nodes, and etcd
host groups.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1497144

Currently, the openshift_version role is run against
the oo_all_hosts group. This causes the dependencies,
such as openshift_docker and docker, to be run against
host groups that were not intended, such as nfs.
This commit explicitly limits the openshift_version
role to run only against masters, nodes, and etcd
host groups.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1497144
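A sketch of the narrower host targeting, using the common group names referenced elsewhere in the playbooks (not necessarily the exact play):

```yaml
# Run openshift_version (and its docker dependencies) only on hosts that need
# a version decision; lb and nfs groups are no longer touched.
- name: Determine openshift_version to install
  hosts: oo_masters_to_config:oo_nodes_to_config:oo_etcd_to_config
  roles:
    - openshift_version
```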

Automatic merge from submit-queue.
Ensure docker service started prior to credentials
Currently, authenticated registry credentials
are requested before docker might be started in
the docker role.
This commit moves the relevant registry credential
tasks to after docker is started.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1316341

Currently, authenticated registry credentials
are requested before docker might be started in
the docker role.
This commit moves the relevant registry credential
tasks to after docker is started.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1316341
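Illustrative ordering only; the task names, module choices, and variables (oreg_host, oreg_auth_user, oreg_auth_password) are assumptions for this sketch, not the role's actual credential tasks:

```yaml
# Start docker first, then perform any authenticated-registry login.
- name: Start and enable docker
  systemd:
    name: docker
    state: started
    enabled: yes

- name: Log in to the authenticated registry        # placeholder credential task
  command: "docker login -u {{ oreg_auth_user }} -p {{ oreg_auth_password }} {{ oreg_host }}"
  no_log: true
```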

Automatic merge from submit-queue.
Removing setting pvc size and dynamic to remove looped var setting
If we don't set openshift_logging_es_pvc_size but have `openshift_logging_es_pvc_dynamic=True`, the variable openshift_logging_elasticsearch_pvc_size ends up set recursively to itself.
Addresses:
https://bugzilla.redhat.com/show_bug.cgi?id=1495150
https://bugzilla.redhat.com/show_bug.cgi?id=1496202
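A sketch of the non-recursive wiring this implies, using the variable names from the description (the actual include in the role may differ):

```yaml
# Pass the es-level value straight through instead of defaulting the
# elasticsearch role variable to itself, which is what produced the loop.
- include_role:
    name: openshift_logging_elasticsearch
  vars:
    openshift_logging_elasticsearch_pvc_dynamic: "{{ openshift_logging_es_pvc_dynamic }}"
    openshift_logging_elasticsearch_pvc_size: "{{ openshift_logging_es_pvc_size | default('') }}"
```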