Automatic merge from submit-queue
Updating to always configure API aggregation with installation
This moves the wiring of the aggregator up into the config playbook, as we want it enabled by default with an installation.
Resolves https://github.com/openshift/openshift-ansible/issues/5056

Automatic merge from submit-queue
Do not reconcile in >= 3.7
Starting with 3.7 we use kube's RBAC, which performs a forceful reconcile at server startup. Explicit reconciles are no longer needed.
Also drop obsolete version checks and simplify the 'when' conditional.
Signed-off-by: Simo Sorce <simo@redhat.com>
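A minimal sketch of the kind of conditional this simplifies, assuming the `openshift_version` variable and the era's `version_compare` filter; the task name, fact name, and command wrapper are illustrative:
```
# Only run an explicit reconcile on releases older than 3.7; on 3.7+,
# Kubernetes RBAC reconciles roles itself at server startup.
- name: Reconcile cluster roles
  command: >
    {{ openshift.common.client_binary }} adm policy
    reconcile-cluster-roles --confirm
  run_once: true
  when: openshift_version | version_compare('3.7', '<')
```
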
juanvallejo/jvallejo/add-health-checks-upgrade-path
Automatic merge from submit-queue
add health checks to 3_6, 3_7 upgrade path
Related BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1483931
Adds health checks to `upgrade_control_plane` and `upgrade_nodes` in 3_6 and 3_7.
cc @sosiouxme @rhcarvalho @brenton

juanvallejo/jvallejo/add-additonal-checks-upgrade-path
Automatic merge from submit-queue
Adding additional checks to the upgrade path
Depends on https://github.com/openshift/openshift-ansible/pull/4960
TODO
- Possibly handle `upgrade` playbook context on `etcd_volume` check
cc @sosiouxme @rhcarvalho

Automatic merge from submit-queue
Cleanup old deployment types
Previously, openshift-ansible supported various types of deployments via the variable "openshift_deployment_type". Currently, openshift-ansible supports only two deployment types, "origin" and "openshift-enterprise". This commit removes all logic and references to the deprecated deployment types.
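For reference, a sketch of the only values the variable still accepts after this cleanup, in YAML inventory form (the comments are mine):
```
# The two deployment types that remain supported:
openshift_deployment_type: origin
# openshift_deployment_type: openshift-enterprise
```
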
[Proposal] OpenShift-Ansible Playbook Consolidation

Automatic merge from submit-queue
Increase rate limiting in journald.conf
@sdodson ptal, this addresses the issues from https://github.com/openshift/origin/issues/12558
@smarterclayton @stevekuznetsov fyi
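A hedged sketch of what such a change typically looks like as an Ansible task; the keys raised and their values are assumptions, not necessarily the PR's exact settings, and a matching "restart journald" handler is assumed to exist:
```
# Raise journald's message rate limits so bursts of container logs
# are not dropped (values illustrative).
- name: Increase rate limits in journald.conf
  lineinfile:
    dest: /etc/systemd/journald.conf
    regexp: '^#?{{ item.key }}='
    line: '{{ item.key }}={{ item.value }}'
  with_items:
    - { key: RateLimitInterval, value: 1s }
    - { key: RateLimitBurst, value: 10000 }
  notify: restart journald
```
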
Automatic merge from submit-queue
Make RH subscription more resilient to temporary failures
subscription-manager can sometimes fail because of server-side errors. Manually replaying the command usually works. So, let's make openshift-ansible more resilient to temporary failures of subscription-manager by retrying the failed commands, with a maximum of 3 retries.
Here is an example of such sporadic errors:
```
TASK [rhel_subscribe : Retrieve the OpenShift Pool ID] *************************
ok: [lenaic-node-compute-c96e7]
ok: [lenaic-master-bbe09]
ok: [lenaic-node-compute-2976a]
fatal: [lenaic-node-infra-47ba5]: FAILED! => {"changed": false, "cmd": ["subscription-manager", "list", "--available", "--matches=Red Hat OpenShift Container Platform, Premium*", "--pool-only"], "delta": "0:00:07.152650", "end": "2017-04-04 11:24:59.729405", "failed": true, "rc": 70, "start": "2017-04-04 11:24:52.576755", "stderr": "Unable to verify server's identity: (104, 'Connection reset by peer')", "stdout": "", "stdout_lines": [], "warnings": []}
TASK [rhel_subscribe : Determine if OpenShift Pool Already Attached] ***********
skipping: [lenaic-master-bbe09]
skipping: [lenaic-node-compute-2976a]
skipping: [lenaic-node-compute-c96e7]
TASK [rhel_subscribe : fail] ***************************************************
skipping: [lenaic-node-compute-2976a]
skipping: [lenaic-master-bbe09]
skipping: [lenaic-node-compute-c96e7]
TASK [rhel_subscribe : Attach to OpenShift Pool] *******************************
fatal: [lenaic-node-compute-c96e7]: FAILED! => {"changed": true, "cmd": ["subscription-manager", "subscribe", "--pool", "8a85f9814ff0134a014ff43b44095513"], "delta": "0:00:21.421300", "end": "2017-04-04 11:25:20.655873", "failed": true, "rc": 70, "start": "2017-04-04 11:24:59.234573", "stderr": "Unable to verify server's identity: (104, 'Connection reset by peer')", "stdout": "Successfully attached a subscription for: Red Hat OpenShift Container Platform, Premium (1-2 Sockets)", "stdout_lines": ["Successfully attached a subscription for: Red Hat OpenShift Container Platform, Premium (1-2 Sockets)"], "warnings": []}
changed: [lenaic-master-bbe09]
changed: [lenaic-node-compute-2976a]
```
In this example, subscription-manager was failing on some nodes, but not all. Retrying on the failed nodes would have avoided abandoning them.
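A minimal sketch of the retry pattern described above, using Ansible's `retries`/`until` keywords; the register variable name and the delay are illustrative:
```
# Retry the flaky subscription-manager call up to 3 times before failing.
- name: Retrieve the OpenShift Pool ID
  command: >
    subscription-manager list --available
    --matches="Red Hat OpenShift Container Platform, Premium*"
    --pool-only
  register: pool_result
  retries: 3
  delay: 10
  until: pool_result.rc == 0
```
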
mgugino-upstream-stage/fix-openshift-version-pkg-install
Automatic merge from submit-queue
Only install base openshift package on masters and nodes
Recent refactoring to remove openshift_common resulted in the base openshift RPMs being installed on more hosts than before, which forced hosts that otherwise would not need access to the openshift repositories to require them. This patch set installs the base openshift package only on openshift masters and nodes.
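A hedged sketch of the described targeting; the `oo_*` group names follow openshift-ansible conventions and the package variable is an assumption:
```
# Install the base package only on hosts that actually need it.
- name: Install base openshift package
  package:
    name: "{{ openshift_service_type }}"
    state: present
  when: >
    inventory_hostname in groups['oo_masters_to_config'] | default([])
    or inventory_hostname in groups['oo_nodes_to_config'] | default([])
```
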
Automatic merge from submit-queue
repoquery bz1482551 followup
Adding retries on the repoqueries I missed in https://github.com/openshift/openshift-ansible/pull/5401

Automatic merge from submit-queue
Bug 1491636 - honor openshift_logging_es_ops_nodeselector
https://bugzilla.redhat.com/show_bug.cgi?id=1491636
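For context, a hedged example of the inventory variable the fix honors; the selector key and value are assumptions:
```
# Pin the ops Elasticsearch pods to dedicated nodes (illustrative selector).
openshift_logging_es_ops_nodeselector:
  region: infra
```
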
Automatic merge from submit-queue
openshift_checks: enable writing results to files
An iteration on how to record check results in a directory structure readable by machines and humans.
Some refactoring of checks and the action plugin to enable writing files locally about the check operation and results, if the user wants them. This is aimed at enabling persistent and machine-readable results from recurring runs of health checks.
Now, rather than trying to build a result hash to return from running each check, checks can just register what they need to as they go along, and the action plugin processes the state when the check is done. Checks can register failures, notes about what they saw, and arbitrary files to be saved into a directory structure the user specifies. If no directory is specified, no files are written. At this time checks can still return a result hash, but that will likely be refactored away in the next iteration.
Multiple failures can be registered without halting check execution. Throwing an exception or returning a hash with "failed" is registered as a failure.
execute_module now does a little more with the results. Results are automatically included in notes and written individually as files. "changed" results are propagated. Some JSON results are decoded.
A few of the checks were enhanced to use these features; all get some of the features for free.
Action items:
- [x] Provide a way for the user to specify an output directory where they want results written
- [x] Enable a check to register multiple failures and not have to assemble them in the result
- [x] Enable a check to register "notes" that will be saved to files but not displayed
- [x] Have module invocations recorded individually as well as in notes
- [x] Enable a check to register files (logs, etc.) from the remote host to be copied to the output dir
- [x] Enable a check to register arbitrary file contents to be written to the output
- [ ] Take advantage of these features where possible in checks
(Last item done somewhat; more should happen as we go along...)
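A hedged sketch of how a user might opt in to file output, assuming a play over the `openshift_health_checker` role; the `openshift_checks_output_dir` variable name is an assumption based on the description above:
```
# Run the health checks and persist notes, failures, and registered
# files under a local results directory.
- name: Run OpenShift health checks with persisted results
  hosts: oo_all_hosts
  roles:
    - role: openshift_health_checker
      openshift_checks_output_dir: /tmp/openshift-check-results
```
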
Fix etcd backup msg error

Automatic merge from submit-queue
hot fix for env variable resolve
If we use environment variables in our inventory files (and from what I've seen we do this everywhere we deploy OCP), our fact engine ignores them, so if my path looks like
```
openshift_hosted_registry_routecertificates={"certfile": "{{inventory_dir}}/../files/certs/wildcard.registry.company.local.crt", "keyfile": "{{inventory_dir}}/../files/certs/wildcard.registry.companylocal.key", "cafile":"{{inventory_dir}}/../files/certs/CompanyLocalRootCA.crt"}
openshift_hosted_registry_routehost=containers.registry.company.local
```
the result is: `/../files/certs/RoSLocalRootCA.crt`
We need to fix our fact setting in the long run to read Ansible variables; the same was already done for the router certificates.

Automatic merge from submit-queue
Fix registry auth task ordering
Currently, registry authentication credentials are not produced until after the docker systemd service files are created. This commit ensures the credentials are created before the systemd service files, so that the proper boolean is set to include the read-only mount of the credentials inside containerized nodes and masters.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1316341
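A minimal ordering sketch of the fix as described; the task-file names are assumptions:
```
# Create registry credentials first, then render the systemd units that
# conditionally mount them read-only into containerized masters/nodes.
- include: registry_auth.yml
- include: systemd_units.yml
```
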
Automatic merge from submit-queue
Prometheus role fixes
- Use official prometheus-alert-buffer image
- Add prometheus annotations to service
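For illustration, the conventional form of Prometheus scrape annotations on a Kubernetes service; the annotation values are assumptions, not necessarily the role's exact settings:
```
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  annotations:
    prometheus.io/scrape: "true"   # ask Prometheus to scrape this service
    prometheus.io/port: "9090"     # port to scrape (illustrative)
```
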
Automatic merge from submit-queue
Always require new variables
Related to https://bugzilla.redhat.com/show_bug.cgi?id=1451023

Moved the checks for osm_cluster_network_cidr, osm_host_subnet_length, and openshift_portal_net from upgrade to openshift_sanitize_inventory, as we now consider them required variables for install, upgrade, or scale-up.
Signed-off-by: Steve Milner <smilner@redhat.com>
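For reference, the three now-required variables with common OpenShift 3.x defaults; the values shown are illustrative assumptions:
```
# Cluster networking variables validated by openshift_sanitize_inventory.
osm_cluster_network_cidr: 10.128.0.0/14   # pod network
osm_host_subnet_length: 9                 # per-host subnet size
openshift_portal_net: 172.30.0.0/16       # service (portal) network
```
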
Automatic merge from submit-queue
Port origin-gce roles for cluster setup to copy AWS provisioning
This is a rough cut of the existing origin-gce structure (itself a refined version of the reference architecture). I've removed everything except core cluster provisioning, image building, and inventory setup. Node groups are part of the "all at once" provisioning but can be changed.
@kwoodson we should talk on Monday; this is me adapting the origin-gce dynamic provisioning to be roughly parallel to openshift_aws. Still some topics we should discuss.

ingvagabund/pull-openshift_master-deps-out-into-a-play
Automatic merge from submit-queue
Pull openshift_master deps out into a play
The `openshift_master` role is called in only a single play, so we can pull all its dependencies out without duplicating the dependency role invocations. Both `lib_openshift` and `lib_os_firewall` are required deps, as they define Ansible modules used inside the `openshift_master` role.
I have also rearranged the variable definitions so variables used only inside a single role are part of the `include_role` statement.
Atm, we can't use `include_role` due to https://github.com/ansible/ansible/issues/21890
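A hedged sketch of the resulting play shape: the dependencies run once as explicit roles ahead of `openshift_master`, with role-scoped variables attached to the role entry (the variable shown is illustrative):
```
- name: Configure masters
  hosts: oo_masters_to_config
  roles:
    - role: lib_openshift     # provides modules used by openshift_master
    - role: lib_os_firewall
    - role: openshift_master
      # illustrative role-scoped variable:
      openshift_master_hosts: "{{ groups['oo_masters_to_config'] | default([]) }}"
```
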
Automatic merge from submit-queue
update system container cwd
This changes the cwd for the system container to be the base of the openshift-ansible content. This way the playbook can be specified as a relative path, and in the future, when we drop the symlinks for various plugins and rely on cwd to find them, this will still work.
Looking through the Dockerfile side of things, I noticed that the run script changes directories to WORK_DIR, which is the content base, so this change brings the two methods closer together. I also looked for anything that actually wrote to the current directory (which is $HOME at the beginning of the run script) and found one: the vault password. It seemed slightly more robust to write that to a temporary location instead, so I tacked on a commit to do that as well.

Automatic merge from submit-queue
Move sysctl.conf customizations to a separate file
Move them from /etc/sysctl.conf to /etc/sysctl.d/99-openshift.conf. This is a good idea because:
1- /etc/sysctl.conf is evaluated later, so it can easily be overwritten by previous customizations
2- It's likely that there is an agent like puppet monitoring this file
3- It's easier to know what's being changed by OpenShift
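A hedged sketch using Ansible's `sysctl` module with the new drop-in path; the particular key and value are illustrative assumptions:
```
# Write OpenShift sysctl settings to their own drop-in file instead of
# /etc/sysctl.conf (key/value illustrative).
- name: Apply OpenShift sysctl customizations
  sysctl:
    name: net.ipv4.ip_forward
    value: 1
    sysctl_file: /etc/sysctl.d/99-openshift.conf
    state: present
```
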
Automatic merge from submit-queue
Add `openshift_node_open_ports` to allow arbitrary firewall exposure
It should be possible for an admin to define an arbitrary set of ports to be exposed on each node that relate to the cluster's function. This adds a new global variable for the node that supports Array(Object{'service':<name>,'port':<port_spec>,'cond':<boolean>}), which is the same format accepted by the firewall role.
@sdodson as discussed, open to alternatives. I used this from origin-gce with
openshift_node_open_ports:
- service: Router stats
  port: 1936/tcp
- service: Open node ports
  port: 9000-10000/tcp
- service: Open node ports
  port: 9000-10000/udp
which then allows me to set firewall rules appropriately.
Alternatives considered:
* Simpler external format (would have to parse inputs)
* Additional parameter to the role - felt ugly