Diffstat (limited to 'docs')
-rw-r--r-- | docs/proposals/README.md | 27
-rw-r--r-- | docs/proposals/playbook_consolidation.md | 178
-rw-r--r-- | docs/proposals/proposal_template.md | 30
-rw-r--r-- | docs/proposals/role_decomposition.md | 353
-rw-r--r-- | docs/pull_requests.md | 8
-rw-r--r-- | docs/repo_structure.md | 19
6 files changed, 605 insertions, 10 deletions
diff --git a/docs/proposals/README.md b/docs/proposals/README.md
new file mode 100644
index 000000000..89bbe5163
--- /dev/null
+++ b/docs/proposals/README.md
@@ -0,0 +1,27 @@

# OpenShift-Ansible Proposal Process

## Proposal Decision Tree
TODO: Add details about when a proposal is or is not required.

## Proposal Process
The following process should be followed when a proposal is needed:

1. Create a pull request with the initial proposal
   * Use the [proposal template][template]
   * Name the proposal using two or three topic words separated by underscores (e.g. `proposal_template.md`)
   * Place the proposal in the `docs/proposals` directory
2. Notify the development team of the proposal and request feedback
3. Review the proposal at the OpenShift-Ansible Architecture Meeting
4. Update the proposal as needed and ask for feedback
5. Approved/Closed Phase
   * If 75% or more of the active development team give the proposal a :+1:, it is Approved
   * If 50% or more of the active development team disagree with the proposal, it is Closed
   * If the person proposing the proposal no longer wishes to continue, they can request that it be Closed
   * If there is no activity on a proposal, the active development team may Close the proposal at their discretion
   * If none of the above is met, the cycle continues at Step 4.
6. For approved proposals, the current development lead(s) will:
   * Update the Pull Request with the result and merge the proposal
   * Create a card on the Cluster Lifecycle [Trello board][trello] so it may be scheduled for implementation.

[template]: proposal_template.md
[trello]: https://trello.com/b/wJYDst6C

diff --git a/docs/proposals/playbook_consolidation.md b/docs/proposals/playbook_consolidation.md
new file mode 100644
index 000000000..98aedb021
--- /dev/null
+++ b/docs/proposals/playbook_consolidation.md
@@ -0,0 +1,178 @@

# OpenShift-Ansible Playbook Consolidation

## Description
The `byo` designation is no longer meaningful: the playbooks in the `byo`
directory can be used to deploy on physical hardware or cloud resources alike.
Consolidating these directories will make the code base easier to maintain and
the project more straightforward for users and developers.

The main points of this proposal are:
* Consolidate the initialization playbooks into one set of playbooks in
  `playbooks/init`.
* Collapse `playbooks/byo` and `playbooks/common` into one set of
  directories at `playbooks/openshift-*`.

This consolidation effort may be more appropriate when the project moves to
using a container as the default installation method.

## Design

### Initialization Playbook Consolidation
Currently there are two separate sets of initialization playbooks:
* `playbooks/byo/openshift-cluster/initialize_groups.yml`
* `playbooks/common/openshift-cluster/std_include.yml`

Although these playbooks are located in the `openshift-cluster` directory, they
are shared by all of the `openshift-*` areas. These playbooks would be better
organized in a `playbooks/init` directory, co-located with all their related
playbooks.
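
Today each `byo` entry point has to chain these two files itself before doing
any real work, roughly as follows (a simplified sketch of the current pattern,
not an exact copy of any repository file):

```yaml
# simplified sketch of a current byo entry point (illustrative only)
---
- include: initialize_groups.yml
  tags:
  - always

- include: ../../common/openshift-cluster/std_include.yml
  tags:
  - always
```
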
In the example below, the following changes have been made:
* `playbooks/byo/openshift-cluster/initialize_groups.yml` renamed to
  `playbooks/init/initialize_host_groups.yml`
* `playbooks/common/openshift-cluster/std_include.yml` renamed to
  `playbooks/init/main.yml`
* `- include: playbooks/init/initialize_host_groups.yml` has been added to the
  top of `playbooks/init/main.yml`
* All other related initialization files have been moved to `playbooks/init`

The `initialize_host_groups.yml` playbook contains only one play with one task,
which imports variables for inventory group conversions. This task could be
further consolidated with the play in `evaluate_groups.yml`.

The new standard initialization playbook would be `playbooks/init/main.yml`.

```
> $ tree openshift-ansible/playbooks/init
.
├── evaluate_groups.yml
├── initialize_facts.yml
├── initialize_host_groups.yml
├── initialize_openshift_repos.yml
├── initialize_openshift_version.yml
├── main.yml
├── roles -> ../../roles
├── validate_hostnames.yml
└── vars
    └── cluster_hosts.yml
```

```yaml
# openshift-ansible/playbooks/init/main.yml
---
- include: initialize_host_groups.yml

- include: evaluate_groups.yml

- include: initialize_facts.yml

- include: validate_hostnames.yml

- include: initialize_openshift_repos.yml

- include: initialize_openshift_version.yml
```

### `byo` and `common` Playbook Consolidation
Historically, the `byo` directory coexisted with other platform-specific
directories whose playbooks called into the `common` playbooks to perform the
installation steps shared by all platforms. Since the other platform
directories have been removed, this separation is no longer necessary.

In the example below, the following changes have been made:
* `playbooks/byo/openshift-master` renamed to
  `playbooks/openshift-master`
* `playbooks/common/openshift-master` renamed to
  `playbooks/openshift-master/private`
* Original `byo` entry point playbooks have been updated to include their
  respective playbooks from `private/`.
* Symbolic links have been updated as necessary

All user-consumable playbooks are in the root of `openshift-master`, and no
entry point playbooks exist in the `private` directory. Maintaining the
separation between entry point playbooks and the private playbooks allows
individual pieces of a deployment to be used as needed by other components.

```
openshift-ansible/playbooks/openshift-master
> $ tree
.
├── config.yml
├── private
│   ├── additional_config.yml
│   ├── config.yml
│   ├── filter_plugins -> ../../../filter_plugins
│   ├── library -> ../../../library
│   ├── lookup_plugins -> ../../../lookup_plugins
│   ├── restart_hosts.yml
│   ├── restart_services.yml
│   ├── restart.yml
│   ├── roles -> ../../../roles
│   ├── scaleup.yml
│   └── validate_restart.yml
├── restart.yml
└── scaleup.yml
```

```yaml
# openshift-ansible/playbooks/openshift-master/config.yml
---
- include: ../init/main.yml

- include: private/config.yml
```

With the directory structure consolidated and component installs removed from
`openshift-cluster`, that directory is no longer necessary. To deploy an entire
OpenShift cluster, a playbook would be created to tie together all of the
different components. The following example shows how multiple components would
be combined to perform a complete install.

```yaml
# openshift-ansible/playbooks/deploy_cluster.yml
---
- include: init/main.yml

- include: openshift-etcd/private/config.yml

- include: openshift-nfs/private/config.yml

- include: openshift-loadbalancer/private/config.yml

- include: openshift-master/private/config.yml

- include: openshift-node/private/config.yml

- include: openshift-glusterfs/private/config.yml

- include: openshift-hosted/private/config.yml

- include: openshift-service-catalog/private/config.yml
```

## User Story
As a developer of OpenShift-Ansible,
I want to simplify the playbook directory structure
so that users can easily find deployment playbooks and developers know where
new features should be developed.

## Implementation
Given the size of this refactoring effort, it should be broken into smaller
steps which can be completed independently while still maintaining a functional
project.

Steps:
1. Update and merge consolidation of the initialization playbooks.
2. Update and merge consolidation of each `openshift-*` component area.
3. Update and merge consolidation of `openshift-cluster`.

## Acceptance Criteria
* Verify that all entry point playbooks install or configure as expected.
* Verify that CI is updated to test the new playbook locations.
* Verify that repo documentation is updated.
* Verify that user documentation is updated.

## References

diff --git a/docs/proposals/proposal_template.md b/docs/proposals/proposal_template.md
new file mode 100644
index 000000000..ece288037
--- /dev/null
+++ b/docs/proposals/proposal_template.md
@@ -0,0 +1,30 @@

# Proposal Title

## Description
<Short introduction>

## Rationale
<Summary of main points of Design>

## Design
<Main content goes here>

## Checklist
* Item 1
* Item 2
* Item 3

## User Story
As a developer on OpenShift-Ansible,
I want ...
so that ...

## Acceptance Criteria
* Verify that ...
* Verify that ...
* Verify that ...

## References
* Link
* Link
* Link

diff --git a/docs/proposals/role_decomposition.md b/docs/proposals/role_decomposition.md
new file mode 100644
index 000000000..6434e24e7
--- /dev/null
+++ b/docs/proposals/role_decomposition.md
@@ -0,0 +1,353 @@

# Scaffolding for decomposing large roles

## Why?

Currently we have roles that are very large and encompass a lot of different
components. This requires a lot of logic within the role, can create complex
conditionals, and increases the learning curve for the role.

## How?

Create a guide describing how to break a large role into smaller,
component-based roles, and how to develop new roles in a way that avoids
creating large roles in the first place.

## Proposal

Create a new guide, or append to the current contributing guide, a process for
identifying large roles that can be split up and for composing smaller roles
going forward.

### Large roles

A role should be considered for decomposition if it:

1) Configures/installs more than one product.
1) Can configure multiple variations of the same product that can live
side by side.
1) Has different entry points for upgrading and installing a product.

Large roles<sup>1</sup> should be responsible for:

> <sup>1</sup> or composing playbooks

1) Composing smaller roles to provide a full solution such as an OpenShift master
1) Ensuring that smaller roles are called in the correct order, if necessary
1) Calling smaller roles with their required variables
1) Performing prerequisite tasks that small roles may depend on being in place
(openshift_logging certificate generation, for example)

### Small roles

A small role should be able to:

1) Be deployed independently of other products (this is different from requiring
installation after other base components such as OCP)
1) Be self-contained and able to determine the facts that it requires to complete
1) Fail fast when facts it requires are not available or are invalid (see the
sketch before the snippets below)
1) "Make it so" based on provided variables, along with anything that may be
required as part of doing so (this should include data migrations)
1) Have a minimal set of dependencies in meta/main.yml, just enough to do its job

### Example using decomposition of openshift_logging

The `openshift_logging` role was created as a port from the deployer image for
the `3.5` deliverable. It was a large role that created the service accounts,
configmaps, secrets, routes, and deployment configs/daemonset required for each
of its different components (Fluentd, Kibana, Curator, Elasticsearch).

It was possible to configure any of the components independently of one another,
up to a point. However, it was an all-or-nothing installation, and there was a
need from customers to be able to do things like deploy just Fluentd.

Supporting multiple versions of configuration files would also become
increasingly messy with a large role, especially if the components changed at
different intervals.

#### Folding of responsibility

There was duplicated work in the installation of three of the four logging
components, which could deploy an 'operations' and a 'non-operations' cluster
side by side. The first step was to collapse that duplicated work into a single
path and allow a variable to be provided so that either variant could be
created.

#### Consolidation of responsibility

The OCP objects required for each component were being created in the same task
file: all service accounts were created at the same time, as were all secrets,
configmaps, etc. The only objects not generated at the same time were the
deployment configs and the daemonset. The second step was to make the small
roles self-contained so that they generate their own required objects.

#### Consideration for prerequisites

Currently the Aggregated Logging stack generates its own certificates, as it has
some requirements that prevent it from utilizing the OCP cert generation
service. To make sure that all components are able to trust one another as they
did previously, until the cert generation service can be used, certificate
generation is handled within the top-level `openshift_logging` role, which then
provides the location of the generated certificates to the individual roles.
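
As a concrete illustration of the "fail fast" criterion from the small-roles
list above, a component role might begin with an assertion like the following
(a hypothetical sketch, not an excerpt from the actual roles; the variable
names are only illustrative):

```yaml
# hypothetical first task of a small component role (illustrative only)
- name: Fail fast when required facts are missing
  assert:
    that:
    - openshift_logging_namespace is defined
    - generated_certs_dir is defined
    msg: "This role requires openshift_logging_namespace and generated_certs_dir to be set"
```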

#### Snippets

[openshift_logging/tasks/install_logging.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/tasks/install_logging.yaml)
```yaml
- name: Gather OpenShift Logging Facts
  openshift_logging_facts:
    oc_bin: "{{openshift.common.client_binary}}"
    openshift_logging_namespace: "{{openshift_logging_namespace}}"

- name: Set logging project
  oc_project:
    state: present
    name: "{{ openshift_logging_namespace }}"

- name: Create logging cert directory
  file:
    path: "{{ openshift.common.config_base }}/logging"
    state: directory
    mode: 0755
  changed_when: False
  check_mode: no

- include: generate_certs.yaml
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"

## Elasticsearch
- include_role:
    name: openshift_logging_elasticsearch
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"

- include_role:
    name: openshift_logging_elasticsearch
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"
    openshift_logging_es_ops_deployment: true
  when:
  - openshift_logging_use_ops | bool

## Kibana
- include_role:
    name: openshift_logging_kibana
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"
    openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
    openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
    openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
    openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
    openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
    openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_replica_count }}"
    openshift_logging_kibana_es_host: "{{ openshift_logging_es_host }}"
    openshift_logging_kibana_es_port: "{{ openshift_logging_es_port }}"
    openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"

- include_role:
    name: openshift_logging_kibana
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"
    openshift_logging_kibana_ops_deployment: true
    openshift_logging_kibana_namespace: "{{ openshift_logging_namespace }}"
    openshift_logging_kibana_master_url: "{{ openshift_logging_master_url }}"
    openshift_logging_kibana_master_public_url: "{{ openshift_logging_master_public_url }}"
    openshift_logging_kibana_image_prefix: "{{ openshift_logging_image_prefix }}"
    openshift_logging_kibana_image_version: "{{ openshift_logging_image_version }}"
    openshift_logging_kibana_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
    openshift_logging_kibana_es_host: "{{ openshift_logging_es_ops_host }}"
    openshift_logging_kibana_es_port: "{{ openshift_logging_es_ops_port }}"
    openshift_logging_kibana_nodeselector: "{{ openshift_logging_kibana_ops_nodeselector }}"
    openshift_logging_kibana_memory_limit: "{{ openshift_logging_kibana_ops_memory_limit }}"
    openshift_logging_kibana_cpu_request: "{{ openshift_logging_kibana_ops_cpu_request }}"
    openshift_logging_kibana_hostname: "{{ openshift_logging_kibana_ops_hostname }}"
    openshift_logging_kibana_replicas: "{{ openshift_logging_kibana_ops_replica_count }}"
    openshift_logging_kibana_proxy_debug: "{{ openshift_logging_kibana_ops_proxy_debug }}"
    openshift_logging_kibana_proxy_memory_limit: "{{ openshift_logging_kibana_ops_proxy_memory_limit }}"
    openshift_logging_kibana_proxy_cpu_request: "{{ openshift_logging_kibana_ops_proxy_cpu_request }}"
    openshift_logging_kibana_cert: "{{ openshift_logging_kibana_ops_cert }}"
    openshift_logging_kibana_key: "{{ openshift_logging_kibana_ops_key }}"
    openshift_logging_kibana_ca: "{{ openshift_logging_kibana_ops_ca }}"
  when:
  - openshift_logging_use_ops | bool

## Curator
- include_role:
    name: openshift_logging_curator
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"
    openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
    openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
    openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
    openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
    openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"

- include_role:
    name: openshift_logging_curator
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"
    openshift_logging_curator_ops_deployment: true
    openshift_logging_curator_namespace: "{{ openshift_logging_namespace }}"
    openshift_logging_curator_master_url: "{{ openshift_logging_master_url }}"
    openshift_logging_curator_image_prefix: "{{ openshift_logging_image_prefix }}"
    openshift_logging_curator_image_version: "{{ openshift_logging_image_version }}"
    openshift_logging_curator_image_pull_secret: "{{ openshift_logging_image_pull_secret }}"
    openshift_logging_curator_memory_limit: "{{ openshift_logging_curator_ops_memory_limit }}"
    openshift_logging_curator_cpu_request: "{{ openshift_logging_curator_ops_cpu_request }}"
    openshift_logging_curator_nodeselector: "{{ openshift_logging_curator_ops_nodeselector }}"
  when:
  - openshift_logging_use_ops | bool

## Fluentd
- include_role:
    name: openshift_logging_fluentd
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"

- include: update_master_config.yaml
```

[openshift_logging_elasticsearch/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging_elasticsearch/meta/main.yaml)
```yaml
---
galaxy_info:
  author: OpenShift Red Hat
  description: OpenShift Aggregated Logging Elasticsearch Component
  company: Red Hat, Inc.
  license: Apache License, Version 2.0
  min_ansible_version: 2.2
  platforms:
  - name: EL
    versions:
    - 7
  categories:
  - cloud
dependencies:
- role: lib_openshift
```

[openshift_logging/meta/main.yaml](https://github.com/ewolinetz/openshift-ansible/blob/logging_component_subroles/roles/openshift_logging/meta/main.yaml)
```yaml
---
galaxy_info:
  author: OpenShift Red Hat
  description: OpenShift Aggregated Logging
  company: Red Hat, Inc.
  license: Apache License, Version 2.0
  min_ansible_version: 2.2
  platforms:
  - name: EL
    versions:
    - 7
  categories:
  - cloud
dependencies:
- role: lib_openshift
- role: openshift_facts
```

[openshift_logging/tasks/install_support.yaml - old](https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_logging/tasks/install_support.yaml)
```yaml
---
# This is the base configuration for installing the other components
- name: Check for logging project already exists
  command: >
    {{ openshift.common.client_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig get project {{openshift_logging_namespace}} --no-headers
  register: logging_project_result
  ignore_errors: yes
  when: not ansible_check_mode
  changed_when: no

- name: "Create logging project"
  command: >
    {{ openshift.common.admin_binary }} --config={{ mktemp.stdout }}/admin.kubeconfig new-project {{openshift_logging_namespace}}
  when: not ansible_check_mode and "not found" in logging_project_result.stderr

- name: Create logging cert directory
  file: path={{openshift.common.config_base}}/logging state=directory mode=0755
  changed_when: False
  check_mode: no

- include: generate_certs.yaml
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"

- name: Create temp directory for all our templates
  file: path={{mktemp.stdout}}/templates state=directory mode=0755
  changed_when: False
  check_mode: no

- include: generate_secrets.yaml
  vars:
    generated_certs_dir: "{{openshift.common.config_base}}/logging"

- include: generate_configmaps.yaml

- include: generate_services.yaml

- name: Generate kibana-proxy oauth client
  template: src=oauth-client.j2 dest={{mktemp.stdout}}/templates/oauth-client.yaml
  vars:
    secret: "{{oauth_secret}}"
  when: oauth_secret is defined
  check_mode: no
  changed_when: no

- include: generate_clusterroles.yaml

- include: generate_rolebindings.yaml

- include: generate_clusterrolebindings.yaml

- include: generate_serviceaccounts.yaml

- include: generate_routes.yaml
```

# Limitations

There will always be exceptions to some of these rules; however, the majority
of roles should be able to fall within these guidelines.

# Additional considerations

## Playbooks including playbooks
In some circumstances it does not make sense to have a composing role; instead,
a playbook is best for orchestrating the role flow. Decisions made regarding
playbooks including playbooks will need to be taken into consideration as part
of defining this process.
Ref: (link to rteague's presentation?)

## Role dependencies
We want to make sure that our roles do not carry any extra or unnecessary
dependencies in meta/main.yml. A dependency should not be added without:

1. Proposing the inclusion in a team meeting or as part of the PR review, and getting agreement
1. Documenting in meta/main.yml why it is there and when it was agreed to (date)

## Avoiding overly verbose roles
When we split our roles up into smaller components, we want to avoid creating
roles that are, for lack of a better term, overly verbose. What do we mean by
that? Taking `openshift_master` as an example, if we were to split it up we
would have components for `etcd`, `docker`, and possibly for its rpms/configs.
We would want to avoid creating a role whose only job is to create certificates,
as those would make sense to be contained with the rpms and configs.
Likewise, when it comes to being able to restart the master, we wouldn't have a
role whose sole purpose is that.

The same applies to the `etcd` and `docker` roles: anything required as part of
installing `etcd`, such as generating certificates, installing rpms, and
upgrading data between versions, should be contained within the single `etcd`
role.

## Enforcing standards
Certain naming standards, such as variable names, could be verified as part of
a Travis test. If we also wanted to enforce that a role either has tasks or
includes (for example), we could create tests for that as well.

## CI tests for individual roles
If we are able to correctly split up roles, it should be possible to test role
installations/upgrades like unit tests (assuming the roles can be installed
independently of other components).

diff --git a/docs/pull_requests.md b/docs/pull_requests.md
index fcc3e275c..45ae01a9d 100644
--- a/docs/pull_requests.md
+++ b/docs/pull_requests.md
@@ -10,8 +10,8 @@
 Whenever a
 [Pull Request is opened](../CONTRIBUTING.md#submitting-contributions), some
 automated test jobs must be successfully run before the PR can be merged.
-Some of these jobs are automatically triggered, e.g., Travis and Coveralls.
-Other jobs need to be manually triggered by a member of the
+Some of these jobs are automatically triggered, e.g., Travis, PAPR, and
+Coveralls. Other jobs need to be manually triggered by a member of the
 [Team OpenShift Ansible Contributors](https://github.com/orgs/openshift/teams/team-openshift-ansible-contributors).

 ## Triggering tests
@@ -48,9 +48,9 @@ simplifying the workflow towards a single infrastructure in the future.

 There are a set of tests that run on Fedora infrastructure. They are started
 automatically with every pull request.

-They are implemented using the [`redhat-ci` framework](https://github.com/jlebon/redhat-ci).
+They are implemented using the [`PAPR` framework](https://github.com/projectatomic/papr).

-To re-run tests, write a comment containing `bot, retest this please`.
+To re-run tests, write a comment containing only `bot, retest this please`.

 ## Triggering merge

diff --git a/docs/repo_structure.md b/docs/repo_structure.md
index 693837fba..49300f80c 100644
--- a/docs/repo_structure.md
+++ b/docs/repo_structure.md
@@ -28,12 +28,6 @@ These are plugins used in playbooks and roles:

 ```
 .
-├── bin                 [DEPRECATED] Contains the `bin/cluster` script, a
-│                       wrapper around the Ansible playbooks that ensures proper
-│                       configuration, and facilitates installing, updating,
-│                       destroying and configuring OpenShift clusters.
-│                       Note: this tool is kept in the repository for legacy
-│                       reasons and will be removed at some point.
 └── utils               Contains the `atomic-openshift-installer` command, an
                         interactive CLI utility to install OpenShift across a set
                         of hosts.
@@ -52,3 +46,16 @@ These are plugins used in playbooks and roles:
 .
 └── test                Contains tests.
 ```
+
+### CI
+
+These files are used by [PAPR](https://github.com/projectatomic/papr).
+It is very similar in workflow to Travis, with the test
+environment and test scripts defined in a YAML file.
+
+```
+.
+├── .papr.yml
+├── .papr.sh
+└── .papr.inventory
+```
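
For orientation, a PAPR configuration generally declares the test host and the
commands to run. A minimal sketch might look like the following; the keys and
the distro string are assumptions based on upstream PAPR examples, not a copy
of this repository's actual `.papr.yml`:

```yaml
# hypothetical minimal .papr.yml (sketch; see the PAPR documentation for the real schema)
host:
  distro: fedora/27/cloud   # assumed distro identifier

tests:
  - ./.papr.sh              # run the repository's CI entry point script
```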