13 files changed, 51 insertions, 22 deletions
diff --git a/README_CONTAINER_IMAGE.md b/README_CONTAINER_IMAGE.md
index f62fc2ab9..35e057af3 100644
--- a/README_CONTAINER_IMAGE.md
+++ b/README_CONTAINER_IMAGE.md
@@ -2,7 +2,7 @@
 
 The [Dockerfile](Dockerfile) in this repository uses the [playbook2image](https://github.com/aweiteka/playbook2image) source-to-image base image to containerize `openshift-ansible`. The resulting image can run any of the provided playbooks.
 
-**Note**: at this time there are known issues that prevent to run this image for installation/upgrade purposes from within one of the hosts that is also an installation target at the same time: if the playbook you want to run attempts to manage the docker daemon and restart it (like install/upgrade playbooks do) this would kill the container itself during its operation.
+**Note**: at this time there are known issues that prevent running this image for installation/upgrade purposes (i.e. running one of the config/upgrade playbooks) from within one of the hosts that is also an installation target at the same time: if the playbook you want to run attempts to manage the docker daemon and restart it (like install/upgrade playbooks do), this would kill the container itself during its operation.
 
 ## Build
 
@@ -11,7 +11,7 @@ To build a container image of `openshift-ansible`:
 1. Using standalone **Docker**:
 
        cd openshift-ansible
-       docker build -t openshift-ansible .
+       docker build -t openshift/openshift-ansible .
 
 1. Using an **OpenShift** build:
 
@@ -20,15 +20,15 @@ To build a container image of `openshift-ansible`:
 
 ## Usage
 
-The base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/aweiteka/playbook2image) documentation.
+The `playbook2image` base image provides several options to control the behaviour of the containers. For more details on these options see the [playbook2image](https://github.com/aweiteka/playbook2image) documentation.
 
 At the very least, when running a container using an image built this way you must specify:
 
-1. The **playbook** to run. This is set using the `PLAYBOOK_FILE` environment variable.
 1. An **inventory** file. This can be mounted inside the container as a volume and specified with the `INVENTORY_FILE` environment variable. Alternatively you can serve the inventory file from a web server and use the `INVENTORY_URL` environment variable to fetch it.
 1. **ssh keys** so that Ansible can reach your hosts. These should be mounted as a volume under `/opt/app-root/src/.ssh`
+1. The **playbook** to run. This is set using the `PLAYBOOK_FILE` environment variable. If you don't specify a playbook, the [`openshift_facts`](playbooks/byo/openshift_facts.yml) playbook will be run to collect and show facts about your OpenShift environment.
 
-Here is an example of how to run a containerized `openshift-ansible` playbook that will check the expiration dates of OpenShift's internal certificates using the [`openshift_certificate_expiry` role](../../roles/openshift_certificate_expiry). The inventory and ssh keys are mounted as volumes (the latter requires setting the uid in the container and SELinux label in the key file via `:Z` so they can be accessed) and the `PLAYBOOK_FILE` environment variable is set to point to an example certificate check playbook that is already part of the image:
+Here is an example of how to run a containerized `openshift-ansible` playbook that will check the expiration dates of OpenShift's internal certificates using the [`openshift_certificate_expiry` role](roles/openshift_certificate_expiry). The inventory and ssh keys are mounted as volumes (the latter requires setting the uid in the container and SELinux label in the key file via `:Z` so they can be accessed) and the `PLAYBOOK_FILE` environment variable is set to point to an example certificate check playbook that is already part of the image:
 
     docker run -u `id -u` \
        -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
@@ -36,6 +36,6 @@ Here is an example of how to run a containerized `openshift-ansible` playbook th
        -e INVENTORY_FILE=/tmp/inventory \
        -e OPTS="-v" \
        -e PLAYBOOK_FILE=playbooks/certificate_expiry/default.yaml \
-       openshift-ansible
+       openshift/openshift-ansible
 
-The [playbook2image examples](https://github.com/aweiteka/playbook2image/tree/master/examples) provide additional information on how to use a built image.
+The [playbook2image examples](https://github.com/aweiteka/playbook2image/tree/master/examples) provide additional information on how to use an image built from it, such as this one.
diff --git a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml
index e4db65b02..86f5a36ca 100644
--- a/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml
+++ b/playbooks/byo/openshift-cluster/upgrades/v3_5/upgrade.yml
@@ -50,6 +50,8 @@
   tags:
   - pre_upgrade
 
+# Note: During upgrade the openshift excluder is not unexcluded inside the initialize_openshift_version.yml play.
+# So it is necessary to run the play after running disable_excluder.yml.
 - include: ../../../../common/openshift-cluster/initialize_openshift_version.yml
   tags:
   - pre_upgrade
diff --git a/playbooks/common/openshift-cluster/config.yml b/playbooks/common/openshift-cluster/config.yml
index 82f711f40..ff4c4b0d7 100644
--- a/playbooks/common/openshift-cluster/config.yml
+++ b/playbooks/common/openshift-cluster/config.yml
@@ -60,3 +60,7 @@
 - include: openshift_hosted.yml
   tags:
   - hosted
+
+- include: reset_excluder.yml
+  tags:
+  - always
diff --git a/playbooks/common/openshift-cluster/initialize_openshift_version.yml b/playbooks/common/openshift-cluster/initialize_openshift_version.yml
index 7f37c606f..f8981c040 100644
--- a/playbooks/common/openshift-cluster/initialize_openshift_version.yml
+++ b/playbooks/common/openshift-cluster/initialize_openshift_version.yml
@@ -18,12 +18,14 @@
         msg: Incompatible versions of yum and subscription-manager found. You may need to update yum and yum-utils.
       when: "not openshift.common.is_atomic | bool and 'Plugin \"search-disabled-repos\" requires API 2.7. Supported API is 2.6.' in yum_ver_test.stdout"
 
+# TODO(jchaloup): find a different way to make repoquery --qf '%version` atomic-openshift work without disabling the excluders
 - include: disable_excluder.yml
   vars:
     # the excluders needs to be disabled no matter what status says
     with_status_check: false
   tags:
   - always
+  when: openshift_upgrade_target is not defined
 
 - name: Determine openshift_version to configure on first master
   hosts: oo_first_master
@@ -39,3 +41,9 @@
       openshift_version: "{{ hostvars[groups.oo_first_master.0].openshift_version }}"
   roles:
   - openshift_version
+
+# Re-enable excluders if they are meant to be enabled (and only during installation; upgrade disables the excluders before this play)
+- include: reset_excluder.yml
+  tags:
+  - always
+  when: openshift_upgrade_target is not defined
diff --git a/playbooks/common/openshift-cluster/upgrades/v3_5/validator.yml b/playbooks/common/openshift-cluster/upgrades/v3_5/validator.yml
index 9c126033c..fb79f898f 100644
--- a/playbooks/common/openshift-cluster/upgrades/v3_5/validator.yml
+++ b/playbooks/common/openshift-cluster/upgrades/v3_5/validator.yml
@@ -35,7 +35,7 @@
         kind: petsets
       register: l_do_petsets_exist
 
-    - name: FAIL ON Resource migration 'PetSets' unsupported
+    - name: Fail on unsupported resource migration 'PetSets'
      fail:
        msg: >
          PetSet objects were detected in your cluster. These are an
@@ -62,6 +62,6 @@
           $ oc get petsets --all-namespaces -o yaml | oc delete -f - --cascale=false
      when:
      # Search did not fail, valid resource type found
-      - l_do_petsets_exist.results.returncode == "0"
+      - l_do_petsets_exist.results.returncode == 0
      # Items do exist in the search results
      - l_do_petsets_exist.results.results.0['items'] | length > 0
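A side note on the validator.yml change above: Ansible `when:` clauses are evaluated as Jinja2 expressions with Python comparison semantics, so an integer return code never compares equal to the string `"0"`. A minimal illustration, not part of the patch, assuming the registered result carries `returncode` as an integer (which is what the fix implies); the `sample_result` dict is an invented stand-in for `l_do_petsets_exist`:

```python
# Invented stand-in for the registered l_do_petsets_exist variable; assumes the
# module reports returncode as an integer, which is what the patch implies.
sample_result = {'results': {'returncode': 0, 'results': [{'items': []}]}}

print(sample_result['results']['returncode'] == "0")  # False -> the old condition could never match
print(sample_result['results']['returncode'] == 0)    # True  -> the corrected condition
```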
diff --git a/playbooks/common/openshift-master/config.yml b/playbooks/common/openshift-master/config.yml
index 7a334e771..68b9db03a 100644
--- a/playbooks/common/openshift-master/config.yml
+++ b/playbooks/common/openshift-master/config.yml
@@ -127,6 +127,8 @@
     etcd_cert_subdir: "openshift-master-{{ openshift.common.hostname }}"
     etcd_cert_config_dir: "{{ openshift.common.config_base }}/master"
     etcd_cert_prefix: "master.etcd-"
+  - role: nuage_master
+    when: openshift.common.use_nuage | bool
 
   post_tasks:
   - name: Create group for deployment type
diff --git a/roles/openshift_logging/filter_plugins/openshift_logging.py b/roles/openshift_logging/filter_plugins/openshift_logging.py
index 9beffaef7..44b0b2d48 100644
--- a/roles/openshift_logging/filter_plugins/openshift_logging.py
+++ b/roles/openshift_logging/filter_plugins/openshift_logging.py
@@ -5,6 +5,18 @@
 import random
 
 
+def es_storage(os_logging_facts, dc_name, pvc_claim, root='elasticsearch'):
+    '''Return a hash with the desired storage for the given ES instance'''
+    deploy_config = os_logging_facts[root]['deploymentconfigs'].get(dc_name)
+    if deploy_config:
+        storage = deploy_config['volumes']['elasticsearch-storage']
+        if storage.get('hostPath'):
+            return dict(kind='hostpath', path=storage.get('hostPath').get('path'))
+    if len(pvc_claim.strip()) > 0:
+        return dict(kind='pvc', pvc_claim=pvc_claim)
+    return dict(kind='emptydir')
+
+
 def random_word(source_alpha, length):
     ''' Returns a random word given the source of characters to pick from and resulting length '''
     return ''.join(random.choice(source_alpha) for i in range(length))
@@ -44,4 +56,5 @@ class FilterModule(object):
             'random_word': random_word,
             'entry_from_named_pair': entry_from_named_pair,
             'map_from_pairs': map_from_pairs,
+            'es_storage': es_storage
         }
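To make the behaviour of the new filter concrete, here is a small self-contained sketch, not part of the patch, that copies `es_storage` as added above and calls it with an invented facts structure shaped like `openshift_logging_facts`:

```python
# Standalone sketch of the es_storage filter added above; the facts dict below
# is invented for illustration and only mimics the shape the role collects.
def es_storage(os_logging_facts, dc_name, pvc_claim, root='elasticsearch'):
    '''Return a hash with the desired storage for the given ES instance'''
    deploy_config = os_logging_facts[root]['deploymentconfigs'].get(dc_name)
    if deploy_config:
        storage = deploy_config['volumes']['elasticsearch-storage']
        if storage.get('hostPath'):
            return dict(kind='hostpath', path=storage.get('hostPath').get('path'))
    if len(pvc_claim.strip()) > 0:
        return dict(kind='pvc', pvc_claim=pvc_claim)
    return dict(kind='emptydir')


facts = {
    'elasticsearch': {
        'deploymentconfigs': {
            'logging-es-1': {
                'volumes': {'elasticsearch-storage': {'hostPath': {'path': '/var/lib/es'}}},
            },
        },
    },
}

# An existing deploymentconfig backed by a hostPath volume keeps it:
print(es_storage(facts, 'logging-es-1', ''))              # {'kind': 'hostpath', 'path': '/var/lib/es'}
# An unknown deploymentconfig with a claim name falls back to a PVC:
print(es_storage(facts, 'logging-es-2', 'logging-es-0'))  # {'kind': 'pvc', 'pvc_claim': 'logging-es-0'}
# An unknown deploymentconfig without a claim gets an emptyDir:
print(es_storage(facts, 'logging-es-2', ''))              # {'kind': 'emptydir'}
```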
diff --git a/roles/openshift_logging/tasks/install_elasticsearch.yaml b/roles/openshift_logging/tasks/install_elasticsearch.yaml
index a0ad12d94..086f9e33f 100644
--- a/roles/openshift_logging/tasks/install_elasticsearch.yaml
+++ b/roles/openshift_logging/tasks/install_elasticsearch.yaml
@@ -2,6 +2,8 @@
 - name: Getting current ES deployment size
   set_fact: openshift_logging_current_es_size={{ openshift_logging_facts.elasticsearch.deploymentconfigs.keys() | length }}
 
+- set_fact: es_pvc_pool={{[]}}
+
 - name: Generate PersistentVolumeClaims
   include: "{{ role_path}}/tasks/generate_pvcs.yaml"
   vars:
@@ -42,10 +44,10 @@
     es_cluster_name: "{{component}}"
    es_cpu_limit: "{{openshift_logging_es_cpu_limit }}"
    es_memory_limit: "{{openshift_logging_es_memory_limit}}"
-    volume_names: "{{es_pvc_pool | default([])}}"
-    pvc_claim: "{{(volume_names | length > item.0) | ternary(volume_names[item.0], None)}}"
+    pvc_claim: "{{(es_pvc_pool | length > item.0) | ternary(es_pvc_pool[item.0], None)}}"
    deploy_name: "{{item.1}}"
    es_node_selector: "{{openshift_logging_es_nodeselector | default({}) }}"
+    es_storage: "{{openshift_logging_facts|es_storage(deploy_name, pvc_claim)}}"
  with_indexed_items:
  - "{{ es_dc_pool }}"
  check_mode: no
@@ -111,8 +113,7 @@
    logging_component: elasticsearch
    deploy_name_prefix: "logging-{{component}}"
    image: "{{openshift_logging_image_prefix}}logging-elasticsearch:{{openshift_logging_image_version}}"
-    volume_names: "{{es_pvc_pool | default([])}}"
-    pvc_claim: "{{(volume_names | length > item.0) | ternary(volume_names[item.0], None)}}"
+    pvc_claim: "{{(es_pvc_pool | length > item.0) | ternary(es_pvc_pool[item.0], None)}}"
    deploy_name: "{{item.1}}"
    es_cluster_name: "{{component}}"
    es_cpu_limit: "{{openshift_logging_es_ops_cpu_limit }}"
@@ -121,7 +122,8 @@
    es_recover_after_nodes: "{{es_ops_recover_after_nodes}}"
    es_recover_expected_nodes: "{{es_ops_recover_expected_nodes}}"
    openshift_logging_es_recover_after_time: "{{openshift_logging_es_ops_recover_after_time}}"
-    es_node_selector: "{{openshift_logging_es_ops_nodeselector | default({}) | map_from_pairs }}"
+    es_node_selector: "{{openshift_logging_es_ops_nodeselector | default({}) }}"
+    es_storage: "{{openshift_logging_facts|es_storage(deploy_name, pvc_claim,root='elasticsearch_ops')}}"
  with_indexed_items:
  - "{{ es_ops_dc_pool | default([]) }}"
  when:
diff --git a/roles/openshift_logging/templates/es-storage-emptydir.partial b/roles/openshift_logging/templates/es-storage-emptydir.partial
new file mode 100644
index 000000000..ccd01a816
--- /dev/null
+++ b/roles/openshift_logging/templates/es-storage-emptydir.partial
@@ -0,0 +1 @@
+          emptyDir: {}
diff --git a/roles/openshift_logging/templates/es-storage-hostpath.partial b/roles/openshift_logging/templates/es-storage-hostpath.partial
new file mode 100644
index 000000000..07ddad9ba
--- /dev/null
+++ b/roles/openshift_logging/templates/es-storage-hostpath.partial
@@ -0,0 +1,2 @@
+          hostPath:
+            path: {{es_storage['path']}}
diff --git a/roles/openshift_logging/templates/es-storage-pvc.partial b/roles/openshift_logging/templates/es-storage-pvc.partial
new file mode 100644
index 000000000..fcbff68de
--- /dev/null
+++ b/roles/openshift_logging/templates/es-storage-pvc.partial
@@ -0,0 +1,2 @@
+          persistentVolumeClaim:
+            claimName: {{es_storage['pvc_claim']}}
diff --git a/roles/openshift_logging/templates/es.j2 b/roles/openshift_logging/templates/es.j2
index 81ae070be..16185fc1d 100644
--- a/roles/openshift_logging/templates/es.j2
+++ b/roles/openshift_logging/templates/es.j2
@@ -103,9 +103,4 @@ spec:
           configMap:
             name: logging-elasticsearch
         - name: elasticsearch-storage
-{% if pvc_claim is defined and pvc_claim | trim | length > 0 %}
-          persistentVolumeClaim:
-            claimName: {{pvc_claim}}
-{% else %}
-          emptyDir: {}
-{% endif %}
+{% include 'es-storage-'+ es_storage['kind'] + '.partial' %}
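The es.j2 change above swaps the inline if/else for a dynamic `{% include %}` that picks one of the new `es-storage-*.partial` templates based on the `kind` returned by the `es_storage` filter. A minimal sketch, not from the repository, of how that resolution works, using the jinja2 library with the partials copied into an in-memory loader; the indentation and the trimmed `es.j2` stand-in are illustrative only:

```python
# Minimal sketch (not from the repository): resolve the dynamic include in es.j2
# against in-memory copies of the new es-storage-*.partial templates.
from jinja2 import Environment, DictLoader

templates = {
    # trimmed stand-in for the volumes section of es.j2
    'es.j2': (
        "        - name: elasticsearch-storage\n"
        "{% include 'es-storage-' + es_storage['kind'] + '.partial' %}"
    ),
    'es-storage-emptydir.partial': "          emptyDir: {}",
    'es-storage-hostpath.partial': "          hostPath:\n            path: {{es_storage['path']}}",
    'es-storage-pvc.partial': "          persistentVolumeClaim:\n            claimName: {{es_storage['pvc_claim']}}",
}

env = Environment(loader=DictLoader(templates))
volume_section = env.get_template('es.j2')

# The es_storage dict is what the filter shown earlier returns.
print(volume_section.render(es_storage={'kind': 'emptydir'}))
print(volume_section.render(es_storage={'kind': 'hostpath', 'path': '/var/lib/elasticsearch'}))
print(volume_section.render(es_storage={'kind': 'pvc', 'pvc_claim': 'logging-es-0'}))
```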
diff --git a/roles/openshift_master/meta/main.yml b/roles/openshift_master/meta/main.yml
index af3e7eeec..18e1b3a54 100644
--- a/roles/openshift_master/meta/main.yml
+++ b/roles/openshift_master/meta/main.yml
@@ -40,8 +40,6 @@ dependencies:
     port: 4001/tcp
   when: groups.oo_etcd_to_config | default([]) | length == 0
 - role: nickhammond.logrotate
-- role: nuage_master
-  when: openshift.common.use_nuage | bool
 - role: contiv
   contiv_role: netmaster
   when: openshift.common.use_contiv | bool