| author | Scott Dodson <sdodson@redhat.com> | 2017-11-16 08:53:17 -0500 | 
|---|---|---|
| committer | GitHub <noreply@github.com> | 2017-11-16 08:53:17 -0500 | 
| commit | 8a85bc3a00efb654238a58e1faa2a2629d9360b1 (patch) | |
| tree | 693186d957899c6e3cd38a6ae5aa66392814ff15 | |
| parent | 7dcc6292e2ca89fe22675491299cf5853860bed8 (diff) | |
| parent | 2e9d134d4564d87dbbc7853b07204f7f44ee01e6 (diff) | |
Merge pull request #6039 from tomassedovic/openstack-provider-githist
Add the OpenStack provider
32 files changed, 3213 insertions, 0 deletions
diff --git a/ansible.cfg b/ansible.cfg index 0ce24607e..9900d28f8 100644 --- a/ansible.cfg +++ b/ansible.cfg @@ -26,6 +26,9 @@ fact_caching = jsonfile  fact_caching_connection = $HOME/ansible/facts  fact_caching_timeout = 600  callback_whitelist = profile_tasks +inventory_ignore_extensions = secrets.py, .pyc, .cfg, .crt +# work around privilege escalation timeouts in ansible: +timeout = 30  # Uncomment to use the provided BYO inventory  #inventory = inventory/byo/hosts.example diff --git a/playbooks/openstack/README.md b/playbooks/openstack/README.md new file mode 100644 index 000000000..f3fe13530 --- /dev/null +++ b/playbooks/openstack/README.md @@ -0,0 +1,262 @@ +# OpenStack Provisioning + +This directory contains [Ansible][ansible] playbooks and roles to create +OpenStack resources (servers, networking, volumes, security groups, +etc.). The result is an environment ready for OpenShift installation +via [openshift-ansible]. + +We provide everything necessary to install OpenShift on +OpenStack (including the DNS and load balancer servers when +necessary). In addition, we are working on integration with the +OpenStack-native services (storage, LBaaS, bare metal as a service, +DNS, etc.). + + +## OpenStack Requirements + +Before you start the installation, you need to have an OpenStack +environment to connect to. You can use a public cloud or an OpenStack +within your organisation. It is also possible to +use [DevStack][devstack] or [TripleO][tripleo]. In the case of +TripleO, we will be running on top of the **overcloud**. + +The OpenStack release must be Newton (for Red Hat OpenStack this is +version 10) or newer. It must also satisfy these requirements: + +* Heat (Orchestration) must be available +* The deployment image (CentOS 7 or RHEL 7) must be loaded +* The deployment flavor must be available to your user +  - `m1.medium` / 4GB RAM + 40GB disk should be enough for testing +  - look at +    the [Minimum Hardware Requirements page][hardware-requirements] +    for production +* The keypair for SSH must be available in OpenStack +* A `keystonerc` file that lets you talk to the OpenStack services +   * NOTE: only Keystone V2 is currently supported + +Optional: +* External Neutron network with a floating IP address pool + + +## DNS Requirements + +OpenShift requires DNS to operate properly. OpenStack supports DNS-as-a-service +in the form of the Designate project, but the playbooks here don't support it +yet. Until we do, you will need to provide a DNS solution yourself (you will +also need one if you are not running Designate once we do support it). + +If your DNS server supports nsupdate, we will use it to add the necessary records. + +TODO(shadower): describe how to build a sample DNS server and how to configure +our playbooks for nsupdate. + + +## Installation + +There are four main parts to the installation: + +1. [Preparing Ansible and dependencies](#1-preparing-ansible-and-dependencies) +2. [Configuring the desired OpenStack environment and OpenShift cluster](#2-configuring-the-openstack-environment-and-openshift-cluster) +3. [Creating the OpenStack resources (VMs, networking, etc.)](#3-creating-the-openstack-resources-vms-networking-etc) +4. [Installing OpenShift](#4-installing-openshift) + +This guide is going to install [OpenShift Origin][origin] +using [CentOS 7][centos7] images with minimal customisation. + +We will create the VMs for running OpenShift in a new Neutron +network, assign floating IP addresses, and configure DNS.
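Before provisioning, it is worth double-checking that your environment meets the OpenStack requirements above. A minimal sanity check, assuming the `openstack` command-line client is installed and your `keystonerc` is in the current directory (the flavor name is this guide's example):

```bash
$ source keystonerc
$ openstack stack list        # succeeds only if Heat (Orchestration) is available
$ openstack image list        # the deployment image (CentOS 7 / RHEL 7) must be loaded
$ openstack flavor show m1.medium
$ openstack keypair list      # your SSH keypair must be registered
```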
+ +The OpenShift cluster will have a single Master node that will run +`etcd`, a single Infra node and two App nodes. + +You can look at +the [Advanced Configuration page][advanced-configuration] for +additional options. + + + +### 1. Preparing Ansible and dependencies + +First, you need to select where to run [Ansible][ansible] from (the +*Ansible host*). This can be the computer you read this guide on or an +OpenStack VM you'll create specifically for this purpose. + +We will use +a +[Docker image that has all the dependencies installed][control-host-image] to +make things easier. If you don't want to use Docker, take a look at +the [Ansible host dependencies][ansible-dependencies] and make sure +they're installed. + +Your *Ansible host* needs to have the following: + +1. Docker +2. `keystonerc` file with your OpenStack credentials +3. SSH private key for logging in to your OpenShift nodes + +Assuming your private key is `~/.ssh/id_rsa` and `keystonerc` is in your +current directory: + +```bash +$ sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \ +     -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \ +     redhatcop/control-host-openstack bash +``` + +This will create the container, add your SSH key and source your +`keystonerc`. The container is then set up for the installation. + +You can verify that everything is in order: + + +```bash +$ less .ssh/id_rsa +$ ansible --version +$ openstack image list +``` + + +### 2. Configuring the OpenStack Environment and OpenShift Cluster + +The configuration is all done in an Ansible inventory directory. We +will clone the [openshift-ansible][openshift-ansible] repository and set +things up for a minimal installation. + + +``` +$ git clone https://github.com/openshift/openshift-ansible +$ cp -r openshift-ansible/playbooks/openstack/sample-inventory/ inventory +``` + +If you're testing multiple configurations, you can have multiple +inventories and switch between them. + +#### OpenStack Configuration + +The OpenStack configuration is in `inventory/group_vars/all.yml`. + +Open the file and plug in the image, flavor and network configuration +corresponding to your OpenStack installation. + +```bash +$ vi inventory/group_vars/all.yml +``` + +1. Set the `openshift_openstack_keypair_name` to your OpenStack keypair name. +   - See `openstack keypair list` to find the keypairs registered with +   OpenStack. +   - This must correspond to your private SSH key in `~/.ssh/id_rsa` +2. Set the `openshift_openstack_external_network_name` to the floating IP +   network of your OpenStack. +   - See `openstack network list` for the list of networks. +   - It's often called `public`, `external` or `ext-net`. +3. Set the `openshift_openstack_default_image_name` to the image you want your +   OpenShift VMs to run. +   - See `openstack image list` for the list of available images. +4. Set the `openshift_openstack_default_flavor` to the flavor you want your +   OpenShift VMs to use. +   - See `openstack flavor list` for the list of available flavors. +5. Set the `openshift_openstack_dns_nameservers` to the list of IP addresses +   of the DNS servers used for the **private** address resolution. + +**NOTE ON DNS**: at minimum, the OpenShift nodes need to be able to access each +other by their hostname.  OpenStack doesn't provide this by default, so you +need to provide a DNS server. Put the address of that DNS server in the +`openshift_openstack_dns_nameservers` variable.
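Taken together, a minimal `inventory/group_vars/all.yml` for this guide might contain entries like these (the values are illustrative; use the names and addresses from your own cloud):

```yaml
openshift_openstack_keypair_name: openshift
openshift_openstack_external_network_name: public
openshift_openstack_default_image_name: centos7
openshift_openstack_default_flavor: m1.medium
openshift_openstack_dns_nameservers: [192.168.0.2]
```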
+ + + + +#### OpenShift Configuration + +The OpenShift configuration is in `inventory/group_vars/OSEv3.yml`. + +The default options will mostly work, but unless you used the large +flavors for a production-ready environment, openshift-ansible's +hardware check will fail. + +Let's disable those checks by putting this in +`inventory/group_vars/OSEv3.yml`: + +```yaml +openshift_disable_check: disk_availability,memory_availability +``` + +**NOTE**: The default authentication method will allow **any username +and password** in! If you're running this in a public place, you need +to set up access control. + +Feel free to look at +the [Sample OpenShift Inventory][sample-openshift-inventory] and +the [advanced configuration][advanced-configuration]. + + +### 3. Creating the OpenStack resources (VMs, networking, etc.) + +We provide an `ansible.cfg` file which has some useful defaults -- you should +copy it to the directory you're going to run `ansible-playbook` from. + +```bash +$ cp openshift-ansible/ansible.cfg ansible.cfg +``` + +Then run the provisioning playbook -- this will create the OpenStack +resources: + +```bash +$ ansible-playbook --user openshift -i inventory openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml +``` + +If you're using multiple inventories, make sure you pass the path to +the right one to `-i`. + +If your SSH private key is not in `~/.ssh/id_rsa` use the `--private-key` +option to specify the correct path. + + +### 4. Installing OpenShift + +Run the `byo/config.yml` playbook on top of the OpenStack nodes we have +prepared. + +```bash +$ ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml +``` + + +### Next Steps + +And that's it! You should have a small but functional OpenShift +cluster now.
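As a quick smoke test, you can list the nodes from the first master (the hostname follows this guide's example domain, and this assumes `oc` is available on the master):

```bash
$ ssh openshift@master-0.openshift.example.com sudo oc get nodes
```

All four nodes should eventually report `Ready`.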
+ +Take a look at [how to access the cluster][accessing-openshift] +and [how to remove it][uninstall-openshift] as well as the more +advanced configuration: + +* [Accessing the OpenShift cluster][accessing-openshift] +* [Removing the OpenShift cluster][uninstall-openshift] +* Set Up Authentication (TODO) +* [Multiple Masters with a load balancer][loadbalancer] +* [External DNS][external-dns] +* Multiple Clusters (TODO) +* [Cinder Registry][cinder-registry] +* [Bastion Node][bastion] + + +[ansible]: https://www.ansible.com/ +[openshift-ansible]: https://github.com/openshift/openshift-ansible +[devstack]: https://docs.openstack.org/devstack/ +[tripleo]: http://tripleo.org/ +[ansible-dependencies]: ./advanced-configuration.md#dependencies-for-localhost-ansible-controladmin-node +[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/ +[hardware-requirements]: https://docs.openshift.org/latest/install_config/install/prerequisites.html#hardware +[origin]: https://www.openshift.org/ +[centos7]: https://www.centos.org/ +[sample-openshift-inventory]: https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.example +[advanced-configuration]: ./advanced-configuration.md +[accessing-openshift]: ./advanced-configuration.md#accessing-the-openshift-cluster +[uninstall-openshift]: ./advanced-configuration.md#removing-the-openshift-cluster +[loadbalancer]: ./advanced-configuration.md#multi-master-configuration +[external-dns]: ./advanced-configuration.md#dns-configuration-variables +[cinder-registry]: ./advanced-configuration.md#creating-and-using-a-cinder-volume-for-the-openshift-registry +[bastion]: ./advanced-configuration.md#configure-static-inventory-and-access-via-a-bastion-node diff --git a/playbooks/openstack/advanced-configuration.md b/playbooks/openstack/advanced-configuration.md new file mode 100644 index 000000000..90cc20b98 --- /dev/null +++ b/playbooks/openstack/advanced-configuration.md @@ -0,0 +1,772 @@ +## Dependencies for localhost (ansible control/admin node) + +* [Ansible 2.3](https://pypi.python.org/pypi/ansible) +* [Ansible-galaxy](https://pypi.python.org/pypi/ansible-galaxy-local-deps) +* [jinja2](http://jinja.pocoo.org/docs/2.9/) +* [shade](https://pypi.python.org/pypi/shade) +* python-jmespath / [jmespath](https://pypi.python.org/pypi/jmespath) +* python-dns / [dnspython](https://pypi.python.org/pypi/dnspython) +* Become (sudo) is not required. + +**NOTE**: You can use a Docker image with all dependencies set up. +Find more in the [Deployment section](#deployment). + +### Optional Dependencies for localhost +**Note**: When using RHEL images, the `rhel-7-server-openstack-10-rpms` repository is required in order to install these packages. + +* `python-openstackclient` +* `python-heatclient` + +## Dependencies for OpenStack hosted cluster nodes (servers) + +There are no additional dependencies for the cluster nodes. The required +configuration steps are performed by Heat, using a specific user-data config +that normally should not be changed.
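For reference, the localhost dependencies above can also be installed directly with pip instead of using the Docker image (a minimal sketch; pin the versions to match your environment):

    pip install ansible shade jmespath dnspython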
+ +## Required galaxy modules + +In order to pull in external dependencies for DNS configuration steps, +the following commands need to be executed: + +    ansible-galaxy install \ +      -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml \ +      -p openshift-ansible-contrib/roles + +Alternatively you can install directly from GitHub: + +    ansible-galaxy install git+https://github.com/redhat-cop/infra-ansible,master \ +      -p openshift-ansible-contrib/roles + +Notes: +* This assumes you are in the directory that contains the cloned +openshift-ansible-contrib repo in its root path. +* When trying to install a different version, the previous one must be removed first +(`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)). +Otherwise, even if there are differences between the two versions, installation of the newer version is skipped. + + +## Accessing the OpenShift Cluster + +### Use the Cluster DNS + +In addition to the OpenShift nodes, we created a DNS server with all +the necessary entries. We will configure your *Ansible host* to use +this new DNS and talk to the deployed OpenShift. + +First, get the DNS IP address: + +```bash +$ openstack server show dns-0.openshift.example.com --format value --column addresses +openshift-ansible-openshift.example.com-net=192.168.99.11, 10.40.128.129 +``` + +Note the floating IP address (it's `10.40.128.129` in this case) -- if +you're not sure, try pinging them both -- it's the one that responds +to pings. + +Next, edit your `/etc/resolv.conf` as root and put `nameserver DNS_IP` as your +**first entry**. + +If your `/etc/resolv.conf` currently looks like this: + +``` +; generated by /usr/sbin/dhclient-script +search openstacklocal +nameserver 192.168.0.3 +nameserver 192.168.0.2 +``` + +Change it to this: + +``` +; generated by /usr/sbin/dhclient-script +search openstacklocal +nameserver 10.40.128.129 +nameserver 192.168.0.3 +nameserver 192.168.0.2 +``` + +### Get the `oc` Client + +**NOTE**: You can skip this section if you're using the Docker image +-- it already has the `oc` binary. + +You need to download the OpenShift command line client (called `oc`). +You can download and extract `openshift-origin-client-tools` from the +OpenShift release page: + +https://github.com/openshift/origin/releases/latest/ + +Or you can copy it from the master node: + +    $ ansible -i inventory masters[0] -m fetch -a "src=/bin/oc dest=oc" + +Either way, find the `oc` binary and put it in your `PATH`. + + +### Logging in Using the Command Line + + +``` +oc login --insecure-skip-tls-verify=true https://master-0.openshift.example.com:8443 -u user -p password +oc new-project test +oc new-app --template=cakephp-mysql-example +oc status -v +curl http://cakephp-mysql-example-test.apps.openshift.example.com +``` + +This will trigger an image build. You can run `oc logs -f +bc/cakephp-mysql-example` to follow its progress.
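For example, to follow the build and watch the pods come up (both are standard `oc` commands):

```
$ oc logs -f bc/cakephp-mysql-example
$ oc get pods -w
```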
+ +Wait until the build has finished and both pods are deployed and running: + +``` +$ oc status -v +In project test on server https://master-0.openshift.example.com:8443 + +http://cakephp-mysql-example-test.apps.openshift.example.com (svc/cakephp-mysql-example) +  dc/cakephp-mysql-example deploys istag/cakephp-mysql-example:latest <- +    bc/cakephp-mysql-example source builds https://github.com/openshift/cakephp-ex.git on openshift/php:7.0 +    deployment #1 deployed about a minute ago - 1 pod + +svc/mysql - 172.30.144.36:3306 +  dc/mysql deploys openshift/mysql:5.7 +    deployment #1 deployed 3 minutes ago - 1 pod + +Info: +  * pod/cakephp-mysql-example-1-build has no liveness probe to verify pods are still running. +    try: oc set probe pod/cakephp-mysql-example-1-build --liveness ... +View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'. + +``` + +You can now look at the deployed app using its route: + +``` +$ curl http://cakephp-mysql-example-test.apps.openshift.example.com +``` + +Its `title` should say: "Welcome to OpenShift". + + +### Accessing the UI + +You can also access the OpenShift cluster with a web browser by going to: + +https://master-0.openshift.example.com:8443 + +Note that for this to work, the OpenShift nodes must be accessible +from your computer and its DNS configuration must use the cluster's +DNS. + + +## Removing the OpenShift Cluster + +Everything in the cluster is contained within a Heat stack. To +completely remove the cluster and all the related OpenStack resources, +run this command: + +```bash +openstack stack delete --wait --yes openshift.example.com +``` + + +## DNS configuration variables + +Pay special attention to the values in the first paragraph -- these +will depend on your OpenStack environment. + +Note that the provisioning playbooks update the original Neutron subnet +created with the Heat stack to point to the configured DNS servers. +So the provisioned cluster nodes will start using those natively as +default nameservers. Technically, this allows deploying OpenShift clusters +without dnsmasq proxies. + +The `openshift_openstack_clusterid` and `openshift_openstack_public_dns_domain` will form the cluster's DNS domain all +your servers will be under. With the default values, this will be +`openshift.example.com`. For workloads, the default subdomain is 'apps'. +That subdomain can also be set via the `openshift_openstack_app_subdomain` variable in +the inventory. + +The `openstack_<role name>_hostname` is a set of variables used for customising +hostnames of servers with a given role. When such a variable stays commented out, +the default hostname (usually the role name) is used. + +The `openshift_openstack_dns_nameservers` is a list of DNS servers accessible from all +the created Nova servers. These will provide the internal name resolution for +your OpenShift nodes (as well as upstream name resolution for installing +packages, etc.). + +The `openshift_use_dnsmasq` variable controls whether dnsmasq is deployed. +By default, dnsmasq is deployed, and the first nameserver entry in each host's +/etc/resolv.conf points to the local instance of the dnsmasq +daemon, which in turn proxies DNS requests to the authoritative DNS server. +When NetworkManager is enabled for provisioned cluster nodes, which is +normally the case, you should not change the defaults and always deploy dnsmasq.
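As a concrete illustration, the DNS-related entries in `inventory/group_vars/all.yml` might look like this (the addresses are placeholders):

    openshift_openstack_clusterid: openshift
    openshift_openstack_public_dns_domain: example.com
    openshift_openstack_app_subdomain: apps
    openshift_openstack_dns_nameservers: [192.168.0.2, 192.168.0.3]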
+ +`openshift_openstack_external_nsupdate_keys` describes the external authoritative DNS server(s) +that process dynamic record updates in the public and private cluster views: + +    openshift_openstack_external_nsupdate_keys: +      public: +        key_secret: <some nsupdate key> +        key_algorithm: 'hmac-md5' +        key_name: 'update-key' +        server: <public DNS server IP> +      private: +        key_secret: <some nsupdate key 2> +        key_algorithm: 'hmac-sha256' +        server: <public or private DNS server IP> + +Here, for the public view section, we specified another key algorithm and +an optional `key_name`, which normally defaults to the cluster's DNS domain. +This just illustrates a compatibility mode with a DNS service deployed +by OpenShift on the OSP10 reference architecture, used in a mixed mode with +another external DNS server. + +Another example defines an external DNS server for the public view, in +addition to the in-stack DNS server, which is used for the private view only: + +    openshift_openstack_external_nsupdate_keys: +      public: +        key_secret: <some nsupdate key> +        key_algorithm: 'hmac-sha256' +        server: <public DNS server IP> + +Here, updates matching the public view will go to the given public +server IP, while updates matching the private view will be sent to the +automatically evaluated in-stack DNS server's **public** IP. + +Note that for the in-stack DNS server, private view updates may be sent only +via the public IP of the server. You cannot send updates via the private +IP yet. This forces the in-stack private server to have a floating IP. +See also the [security notes](#security-notes). + +## Flannel networking + +In order to configure the +[flannel networking](https://docs.openshift.com/container-platform/3.6/install_config/configuring_sdn.html#using-flannel), +uncomment and adjust the appropriate `inventory/group_vars/OSEv3.yml` group vars. +Note that the `osm_cluster_network_cidr` must not overlap with the default +Docker bridge subnet of 172.17.0.0/16. Otherwise, change the default docker0 +CIDR range, for example by adding `--bip=192.168.2.1/24` to +`DOCKER_NETWORK_OPTIONS` located in `/etc/sysconfig/docker-network`. + +Also note that the flannel network will be provisioned on a separate isolated Neutron +subnet defined from `osm_cluster_network_cidr` and with port security disabled. +Use the `openstack_private_data_network_name` variable to define the network +name for the heat stack resource. + +After the cluster deployment is done, run an additional post-installation +step to configure flannel and Docker iptables: + +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-install.yml + +## Other configuration variables + +`openshift_openstack_keypair_name` is a Nova keypair - you can see your +keypairs with `openstack keypair list`. It must correspond to the +private SSH key Ansible will use to log into the created VMs. This is +`~/.ssh/id_rsa` by default, but you can use a different key by passing +`--private-key` to `ansible-playbook`. + +`openshift_openstack_default_image_name` is the default name of the Glance image the +servers will use. You can see your images with `openstack image list`. +In order to set a different image for a role, uncomment the line with the +corresponding variable (e.g. `openshift_openstack_lb_image_name` for load balancer) and +set its value to another available image name.
`openshift_openstack_default_image_name` +must stay defined as it is used as a default value for the rest of the roles. + +`openshift_openstack_default_flavor` is the default Nova flavor the servers will use. +You can see your flavors with `openstack flavor list`. +In order to set a different flavor for a role, uncomment the line with the +corresponding variable (e.g. `openshift_openstack_lb_flavor` for load balancer) and +set its value to another available flavor. `openshift_openstack_default_flavor` must +stay defined as it is used as a default value for the rest of the roles. + +`openshift_openstack_external_network_name` is the name of the Neutron network +providing external connectivity. It is often called `public`, +`external` or `ext-net`. You can see your networks with `openstack +network list`. + +`openshift_openstack_private_network_name` is the name of the private Neutron network +providing admin/control access for Ansible. It can be merged with other +cluster networks; there are no special requirements for networking. + +The `openshift_openstack_num_masters`, `openshift_openstack_num_infra` and +`openshift_openstack_num_nodes` values specify the number of Master, Infra and +App nodes to create. + +The `openshift_openstack_cluster_node_labels` defines custom labels for your openshift +cluster node groups. It currently supports app and infra node groups. +The default value of this variable sets `region: primary` to app nodes and +`region: infra` to infra nodes. +An example of setting a customised label: +``` +openshift_openstack_cluster_node_labels: +  app: +    mylabel: myvalue +``` + +The `openshift_openstack_nodes_to_remove` allows you to specify the numerical indexes +of App nodes that should be removed; for example, `['0', '2']`. + +The `docker_volume_size` is the default Docker volume size the servers will use. +In order to set a different volume size for a role, +uncomment the line with the corresponding variable (e.g. `docker_master_volume_size` +for master) and change its value. `docker_volume_size` must stay defined as it is +used as a default value for some of the servers (master, infra, app node). +The rest of the roles (etcd, load balancer, dns) have their defaults hard-coded. + +**Note**: If `openshift_openstack_ephemeral_volumes` is set to `true`, the `*_volume_size` variables +will be ignored and the deployment will not create any cinder volumes. + +The `openshift_openstack_flat_secgrp` variable controls Neutron security group creation for Heat +stacks. Set it to true if you experience issues with security group rule +quotas. It trades security for a smaller number of rules by sharing the same set +of firewall rules for master, node, etcd and infra nodes. + +The `openshift_openstack_required_packages` variable also provides a list of the additional +prerequisite packages to be installed before deploying an OpenShift cluster. +These are ignored if `manage_packages: False` is set. + +The `openstack_inventory` variable controls whether a static inventory will be created after the +cluster nodes are provisioned on the OpenStack cloud. Note that a fully dynamic inventory +is yet to be supported, so the static inventory will be created anyway. + +The `openstack_inventory_path` points to the directory hosting the generated static inventory. +It should point to the copied example inventory directory, otherwise it creates +a new one for you.
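To illustrate the per-role overrides described above, the following gives just the load balancer a different image and flavor while every other role keeps the defaults (the values are illustrative):

    openshift_openstack_lb_image_name: rhel7
    openshift_openstack_lb_flavor: m1.large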
+ +## Multi-master configuration + +Please refer to the official documentation for the +[multi-master setup](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#multiple-masters) +and define the corresponding [inventory +variables](https://docs.openshift.com/container-platform/3.6/install_config/install/advanced_install.html#configuring-cluster-variables) +in `inventory/group_vars/OSEv3.yml`. For example, given a load balancer node +under the ansible group named `ext_lb`: + +    openshift_master_cluster_method: native +    openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}" +    openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}" + +## Provider Network + +Normally, the playbooks create a new Neutron network and subnet and attach +floating IP addresses to each node. If you have a provider network set up, this +is all unnecessary as you can just access servers that are placed in the +provider network directly. + +To use a provider network, set its name in `openshift_openstack_provider_network_name` in +`inventory/group_vars/all.yml`. + +If you set the provider network name, the `openshift_openstack_external_network_name` and +`openshift_openstack_private_network_name` fields will be ignored. + +**NOTE**: this will not update the nodes' DNS, so running openshift-ansible +right after provisioning will fail (unless you're using an external DNS server +your provider network knows about). You must make sure your nodes are able to +resolve each other by name. + +## Security notes + +Configure the required `*_ingress_cidr` variables to restrict public access +to provisioned servers from your laptop (a /32 notation should be used) +or your trusted network. The most important is `openshift_openstack_node_ingress_cidr`, +which restricts public access to the deployed DNS server and the cluster +nodes' ephemeral port range. + +Note that the command ``curl https://api.ipify.org`` helps find the external +IP address of your box (the Ansible admin node). + +There is also the `manage_packages` variable (defaults to True) you +may want to turn off in order to speed up the provisioning tasks. This may +be the case for development environments. When turned off, the servers will +be provisioned omitting the ``yum update`` command. This has security +implications though, and is not recommended for production deployments. + +### DNS servers security options + +Aside from `openshift_openstack_node_ingress_cidr` restricting public access to in-stack DNS +servers, the following (bind/named-specific) DNS security +options are available: + +    named_public_recursion: 'no' +    named_private_recursion: 'yes' + +External DNS servers, which are not included in the 'dns' hosts group, +are not managed. It is up to you to configure them. + +## Configure the OpenShift parameters + +Finally, you need to update the DNS entry in +`inventory/group_vars/OSEv3.yml` (look at +`openshift_master_default_subdomain`). + +In addition, this is the place where you can customise your OpenShift +installation, for example by specifying the authentication method.
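For example, to replace the default allow-all authentication with htpasswd-backed logins, you can add an identity provider to `inventory/group_vars/OSEv3.yml` (a sketch using openshift-ansible's standard variable; you must create and populate the htpasswd file on the masters yourself):

    openshift_master_identity_providers:
    - name: htpasswd_auth
      login: 'true'
      challenge: 'true'
      kind: HTPasswdPasswordIdentityProvider
      filename: /etc/origin/master/htpasswd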
+ +The full list of options is available in this sample inventory: + +https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example + +Note that in order to deploy OpenShift Origin, you should update the following +variables in `inventory/group_vars/OSEv3.yml` and `all.yml`: + +    deployment_type: origin +    openshift_deployment_type: "{{ deployment_type }}" + + +## Setting a custom entrypoint + +In order to set a custom entrypoint, update `openshift_master_cluster_public_hostname`: + +    openshift_master_cluster_public_hostname: api.openshift.example.com + +Note that an empty hostname does not work, so if your domain is `openshift.example.com`, +you cannot set this value to simply `openshift.example.com`. + +## Creating and using a Cinder volume for the OpenShift registry + +You can optionally have the playbooks create a Cinder volume and set +it up as the OpenShift hosted registry. + +To do that, you need to specify the desired Cinder volume name and size in +gigabytes in `inventory/group_vars/all.yml`: + +    openshift_openstack_cinder_hosted_registry_name: cinder-registry +    openshift_openstack_cinder_hosted_registry_size_gb: 10 + +With this, the playbooks will create the volume and set up its +filesystem. If there is an existing volume of the same name, we will +use it but keep the existing data on it. + +To use the volume for the registry, you must first configure it with +the OpenStack credentials by putting the following into `OSEv3.yml`: + +    openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}" +    openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}" +    openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}" +    openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}" + +This will use the credentials from your shell environment. If you want +to enter them explicitly, you can. You can also use credentials +different from the provisioning ones (say for quota or access control +reasons). + +**NOTE**: If you're testing this on [DevStack][devstack], you must +explicitly set your Keystone API version to v2 (e.g. +`OS_AUTH_URL=http://10.34.37.47/identity/v2.0`) instead of the default +value provided by `openrc`. You may also encounter the following issue +with Cinder: + +https://github.com/kubernetes/kubernetes/issues/50461 + +You can read the [OpenShift documentation on configuring +OpenStack][openstack] for more information. + +[devstack]: https://docs.openstack.org/devstack/latest/ +[openstack]: https://docs.openshift.org/latest/install_config/configuring_openstack.html + + +Next, we need to instruct OpenShift to use the Cinder volume for its +registry. Again in `OSEv3.yml`: + +    #openshift_hosted_registry_storage_kind: openstack +    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] +    #openshift_hosted_registry_storage_openstack_filesystem: xfs + +The filesystem value here will be used in the initial formatting of +the volume.
+ +If you're using the dynamic inventory, you must uncomment these two values as +well: + +    #openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', openshift_openstack_cinder_hosted_registry_name).id }}" +    #openshift_hosted_registry_storage_volume_size: "{{ openshift_openstack_cinder_hosted_registry_size_gb }}Gi" + +But note that they use the `os_cinder` lookup plugin we provide, so you must +tell Ansible where to find it either in `ansible.cfg` (the one we provide is +configured properly) or by exporting the +`ANSIBLE_LOOKUP_PLUGINS=openshift-ansible-contrib/lookup_plugins` environment +variable. + + + +## Use an existing Cinder volume for the OpenShift registry + +You can also use a pre-existing Cinder volume for the storage of your +OpenShift registry. + +To do that, you need to have a Cinder volume. You can create one by +running: + +    openstack volume create --size <volume size in gb> <volume name> + +The volume needs to have a file system created before you put it to +use. + +As with the automatically-created volume, you have to set up the +OpenStack credentials in `inventory/group_vars/OSEv3.yml` as well as +registry values: + +    #openshift_hosted_registry_storage_kind: openstack +    #openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] +    #openshift_hosted_registry_storage_openstack_filesystem: xfs +    #openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05 +    #openshift_hosted_registry_storage_volume_size: 10Gi + +Note the `openshift_hosted_registry_storage_openstack_volumeID` and +`openshift_hosted_registry_storage_volume_size` values: these need to +be added in addition to the previous variables. + +The **Cinder volume ID**, **filesystem** and **volume size** variables +must correspond to the values in your volume. The volume ID must be +the **UUID** of the Cinder volume, *not its name*. + +We can format the volume for you if you ask for it in +`inventory/group_vars/all.yml`: + +    openshift_openstack_prepare_and_format_registry_volume: true + +**NOTE:** doing so **will destroy any data that's currently on the volume**! + +You can also run the registry setup playbook directly: + +    ansible-playbook -i inventory playbooks/provisioning/openstack/prepare-and-format-cinder-volume.yaml + +(the provisioning phase must be completed first) + + + +## Configure static inventory and access via a bastion node + +Example inventory variables: + +    openshift_openstack_use_bastion: true +    openshift_openstack_bastion_ingress_cidr: "{{openshift_openstack_subnet_prefix}}.0/24" +    openstack_private_ssh_key: ~/.ssh/id_rsa +    openstack_inventory: static +    openstack_inventory_path: ../../../../inventory +    openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com + +The `openshift_openstack_subnet_prefix` is the prefix of the OpenStack private network for your cluster. +The `openshift_openstack_bastion_ingress_cidr` defines the accepted range for SSH connections to the nodes, +in addition to the `openshift_openstack_ssh_ingress_cidr` (see the security notes above). + +The SSH config will be stored on the Ansible control node at the +given path. Ansible uses it automatically.
To access the cluster nodes with +that ssh config, use the `-F` option, e.g.: + +    ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK + +Note that relative paths will not work for the `openstack_ssh_config_path`, but they +do work for the `openstack_private_ssh_key` and `openstack_inventory_path`. In this +guide, the latter points to the current directory, where you run ansible commands +from. + +To verify node connectivity, use the command: + +    ansible -v -i inventory/hosts -m ping all + +If something is broken, double-check the inventory variables, paths and the +generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files. + +Setting `openstack_inventory: dynamic` can be used instead to access cluster nodes directly via +floating IPs. In this mode you cannot use a bastion node and should specify +the dynamic inventory file in your ansible commands, like `-i openstack.py`. + +## Using Docker on the Ansible host + +If you don't want to worry about the dependencies, you can use the +[OpenStack Control Host image][control-host-image]. + +[control-host-image]: https://hub.docker.com/r/redhatcop/control-host-openstack/ + +It has all the dependencies installed, but you'll need to map your +code and credentials to it. Assuming your SSH keys live in `~/.ssh` +and everything else is in your current directory (i.e. `ansible.cfg`, +`keystonerc`, `inventory`, `openshift-ansible`, +`openshift-ansible-contrib`), this is how you run the deployment: + +    sudo docker run -it -v ~/.ssh:/mnt/.ssh:Z \ +        -v $PWD:/root/openshift:Z \ +        -v $PWD/keystonerc:/root/.config/openstack/keystonerc.sh:Z \ +        redhatcop/control-host-openstack bash + +(feel free to replace `$PWD` with an actual path to your inventory and +checkouts, but note that relative paths don't work) + +The first run may take a few minutes while the image is being +downloaded. After that, you'll be inside the container and you can run +the playbooks: + +    cd openshift +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml + + +### Run the playbook + +Assuming your OpenStack (Keystone) credentials are in the `keystonerc`, +this is how you start the provisioning process from your ansible control node: + +    . keystonerc +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml + +Note that here you start with an empty inventory. The static inventory will be populated +with data so you can omit providing additional arguments for future ansible commands. + +If the bastion is enabled, the generated SSH config must be applied for Ansible. +It is auto-included by the previous step; in order to execute it +as a separate playbook, use the following command: + +    ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml + +The first infra node then becomes a bastion node as well and proxies access +for future ansible commands. The post-provision step also configures Satellite +(if requested) and the DNS server, and ensures other OpenShift requirements are met.
+ + +## Running Custom Post-Provision Actions + +A custom playbook can be run like this: + +``` +ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml +``` + +If you'd like to limit the run to one particular host, you can do so as follows: + +``` +ansible-playbook --private-key ~/.ssh/openshift -i inventory/ openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/custom-playbook.yml -l app-node-0.openshift.example.com +``` + +You can also create your own custom playbook. Here are a few examples: + +### Adding additional YUM repositories + +``` +--- +- hosts: app +  tasks: + +  # enable EPEL +  - name: Add repository +    yum_repository: +      name: epel +      description: EPEL YUM repo +      baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/ +``` + +This example runs against app nodes. The list of host group options includes: + +  - cluster_hosts (all hosts: app, infra, masters, dns, lb) +  - OSEv3 (app, infra, masters) +  - app +  - dns +  - masters +  - infra_hosts + +### Attaching additional RHN pools + +``` +--- +- hosts: cluster_hosts +  tasks: +  - name: Attach additional RHN pool +    become: true +    command: "/usr/bin/subscription-manager attach --pool=<pool ID>" +    register: attach_rhn_pool_result +    until: attach_rhn_pool_result.rc == 0 +    retries: 10 +    delay: 1 +``` + +This playbook runs against all cluster nodes. In order to help prevent slow connectivity +problems, the task is retried 10 times in case of initial failure. +Note that in order for this example to work in your deployment, your servers must use the RHEL image. + +### Adding extra Docker registry URLs + +This playbook is located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/tree/master/playbooks/provisioning/openstack/custom-actions) directory. + +It adds URLs passed as arguments to the Docker configuration file. +Going into more detail, the configuration file (which is in the YAML format) is loaded into an ansible variable +([lines 27-30](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L27-L30)) +and in its structure, `registries` and `insecure_registries` sections are expanded with the newly added items +([lines 56-76](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L56-L76)). +The new content is then saved into the original file +([lines 78-82](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml#L78-L82)) +and Docker is restarted. + +Example usage: +``` +ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml  --extra-vars '{"registries": "reg1", "insecure_registries": ["ins_reg1","ins_reg2"]}' +``` + +### Adding extra CAs to the trust chain + +This playbook is also located in the [custom-actions](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions) directory. +It copies passed CAs to the trust chain location and updates the trust chain on each selected host.
+ +Example usage: +``` +ansible-playbook -i <inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions/add-cas.yml --extra-vars '{"ca_files": [<absolute path to ca1 file>, <absolute path to ca2 file>]}' +``` + +Please consider contributing your custom playbook back to openshift-ansible-contrib! + +A library of custom post-provision actions exists in `openshift-ansible-contrib/playbooks/provisioning/openstack/custom-actions`. Playbooks include: + +* [add-yum-repos.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-yum-repos.yml): adds a list of custom yum repositories to every node in the cluster +* [add-rhn-pools.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-rhn-pools.yml): attaches a list of additional RHN pools to every node in the cluster +* [add-docker-registry.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-docker-registry.yml): adds a list of docker registries to the docker configuration on every node in the cluster +* [add-cas.yml](https://github.com/openshift/openshift-ansible-contrib/blob/master/playbooks/provisioning/openstack/custom-actions/add-cas.yml): adds a list of CAs to the trust chain on every node in the cluster + + +## Install OpenShift + +Once it succeeds, you can install OpenShift by running: + +    ansible-playbook openshift-ansible/playbooks/byo/config.yml + +## Access UI + +The OpenShift UI may be accessed via the first master node's FQDN on port 8443. + +When using a bastion, you may want to make an SSH tunnel from your control node +to access the UI at `https://localhost:8443`, with this inventory variable: + +    openshift_openstack_ui_ssh_tunnel: True + +Note that this requires sudo rights on the Ansible control node and an absolute path +for the `openstack_private_ssh_key`. You should also update the control node's +`/etc/hosts`: + +    127.0.0.1 master-0.openshift.example.com + +In order to access the UI, an ssh-tunnel service will be created and started on the +control node. Make sure to remove these changes and the service manually when they are +no longer needed. + +## Scale Deployment up/down + +### Scaling up + +One can scale up the number of application nodes by executing the ansible playbook +`openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml`. +This process can be done even if there is currently no deployment available. +The `increment_by` variable is used to specify by how much the deployment should +be scaled up (if none exists, it serves as a target number of application nodes). +The path to the `openshift-ansible` directory can be customised via the `openshift_ansible_dir` +variable. Its value must be an absolute path to `openshift-ansible` and must not +end with a '/'. + +Usage: + +``` +ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>] +``` + +Note: This playbook works only without a bastion node (`openshift_openstack_use_bastion: False`).
diff --git a/playbooks/openstack/openshift-cluster/install.yml b/playbooks/openstack/openshift-cluster/install.yml new file mode 100644 index 000000000..40d4767ba --- /dev/null +++ b/playbooks/openstack/openshift-cluster/install.yml @@ -0,0 +1,18 @@ +--- +# NOTE(shadower): the AWS playbook builds an in-memory inventory of +# all the EC2 instances here. We don't need to as that's done by the +# dynamic inventory. + +# TODO(shadower): the AWS playbook sets the +# `openshift_master_cluster_hostname` and `osm_custom_cors_origins` +# values here. We do it in the OSEv3 group vars. Do we need to add +# some logic here? + +- name: normalize groups +  include: ../../byo/openshift-cluster/initialize_groups.yml + +- name: run the std_include +  include: ../../common/openshift-cluster/std_include.yml + +- name: run the config +  include: ../../common/openshift-cluster/config.yml diff --git a/playbooks/openstack/openshift-cluster/prerequisites.yml b/playbooks/openstack/openshift-cluster/prerequisites.yml new file mode 100644 index 000000000..0356b37dd --- /dev/null +++ b/playbooks/openstack/openshift-cluster/prerequisites.yml @@ -0,0 +1,12 @@ +--- +- hosts: localhost +  tasks: +  - name: Check dependencies and OpenStack prerequisites +    include_role: +      name: openshift_openstack +      tasks_from: check-prerequisites.yml + +  - name: Check network configuration +    include_role: +      name: openshift_openstack +      tasks_from: net_vars_check.yaml diff --git a/playbooks/openstack/openshift-cluster/provision.yml b/playbooks/openstack/openshift-cluster/provision.yml new file mode 100644 index 000000000..fe3057158 --- /dev/null +++ b/playbooks/openstack/openshift-cluster/provision.yml @@ -0,0 +1,61 @@ +--- +- name: Create the OpenStack resources for cluster installation +  hosts: localhost +  tasks: +  - name: provision cluster +    include_role: +      name: openshift_openstack +      tasks_from: provision.yml + + +# NOTE(shadower): Bring in the host groups: +- name: normalize groups +  include: ../../byo/openshift-cluster/initialize_groups.yml +- name: evaluate groups +  include: ../../common/openshift-cluster/evaluate_groups.yml + + +- name: Wait for the nodes and gather their facts +  hosts: oo_all_hosts +  become: yes +  # NOTE: The nodes may not be up yet, don't gather facts here. +  # They'll be collected after `wait_for_connection`. +  gather_facts: no +  tasks: +  - name: Wait for the nodes to come up +    wait_for_connection: + +  - name: Gather facts for the new nodes +    setup: + + +# NOTE(shadower): the (internal) DNS must be functional at this point!! +# That will have happened in provision.yml if nsupdate was configured. + +# TODO(shadower): consider splitting this up so people can stop here +# and configure their DNS if they have to.
+- name: Populate the DNS entries +  hosts: localhost +  tasks: +  - name: Populate DNS entries +    include_role: +      name: openshift_openstack +      tasks_from: populate-dns.yml +    when: +    - openshift_openstack_external_nsupdate_keys is defined +    - openshift_openstack_external_nsupdate_keys.private is defined or openshift_openstack_external_nsupdate_keys.public is defined + +- name: Prepare the Nodes in the cluster for installation +  hosts: oo_all_hosts +  become: yes +  gather_facts: yes +  tasks: +  - name: Install dependencies +    include_role: +      name: openshift_openstack +      tasks_from: node-packages.yml + +  - name: Configure Node +    include_role: +      name: openshift_openstack +      tasks_from: node-configuration.yml diff --git a/playbooks/openstack/openshift-cluster/provision_install.yml b/playbooks/openstack/openshift-cluster/provision_install.yml new file mode 100644 index 000000000..5d88c105f --- /dev/null +++ b/playbooks/openstack/openshift-cluster/provision_install.yml @@ -0,0 +1,9 @@ +--- +- name: Check the prerequisites for cluster provisioning in OpenStack +  include: prerequisites.yml + +- name: Include the provision.yml playbook to create cluster +  include: provision.yml + +- name: Include the install.yml playbook to install cluster +  include: install.yml diff --git a/playbooks/openstack/openshift-cluster/roles b/playbooks/openstack/openshift-cluster/roles new file mode 120000 index 000000000..e2b799b9d --- /dev/null +++ b/playbooks/openstack/openshift-cluster/roles @@ -0,0 +1 @@ +../../../roles/
\ No newline at end of file diff --git a/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml new file mode 100644 index 000000000..1e55adb9e --- /dev/null +++ b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml @@ -0,0 +1,59 @@ +--- +openshift_deployment_type: origin +#openshift_deployment_type: openshift-enterprise +#openshift_release: v3.5 +openshift_master_default_subdomain: "apps.{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}" + +openshift_master_cluster_method: native +openshift_master_cluster_hostname: "console.{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}" +openshift_master_cluster_public_hostname: "{{ openshift_master_cluster_hostname }}" + +osm_default_node_selector: 'region=primary' + +openshift_hosted_router_wait: True +openshift_hosted_registry_wait: True + +## OpenStack credentials +#openshift_cloudprovider_kind: openstack +#openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}" +#openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}" +#openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}" +#openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}" +#openshift_cloudprovider_openstack_region: "{{ lookup('env', 'OS_REGION_NAME') }}" + + +## Use Cinder volume for the OpenShift registry: +#openshift_hosted_registry_storage_kind: openstack +#openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce'] +#openshift_hosted_registry_storage_openstack_filesystem: xfs + +## NOTE(shadower): This won't work until the openshift-ansible issue #5657 is fixed: +## https://github.com/openshift/openshift-ansible/issues/5657 +## If you're using the `openshift_openstack_cinder_hosted_registry_name` option from +## `all.yml`, uncomment these lines: +#openshift_hosted_registry_storage_openstack_volumeID: "{{ lookup('os_cinder', openshift_openstack_cinder_hosted_registry_name).id }}" +#openshift_hosted_registry_storage_volume_size: "{{ openshift_openstack_cinder_hosted_registry_size_gb }}Gi" + +## If you're using a Cinder volume you've set up yourself, uncomment these lines: +#openshift_hosted_registry_storage_openstack_volumeID: e0ba2d73-d2f9-4514-a3b2-a0ced507fa05 +#openshift_hosted_registry_storage_volume_size: 10Gi + + +# NOTE(shadower): the hostname check seems to always fail because the +# host's floating IP address doesn't match the address received from +# inside the host. +openshift_override_hostname_check: true + +# For POCs or demo environments that are using smaller instances than +# the official recommended values for RAM and DISK, uncomment the line below. +#openshift_disable_check: disk_availability,memory_availability + +# NOTE(shadower): Always switch to root on the OSEv3 nodes. +# openshift-ansible requires an explicit `become`.
+ansible_become: true + +# # Flannel networking +#osm_cluster_network_cidr: 10.128.0.0/14 +#openshift_use_openshift_sdn: false +#openshift_use_flannel: true +#flannel_interface: eth1 diff --git a/playbooks/openstack/sample-inventory/group_vars/all.yml b/playbooks/openstack/sample-inventory/group_vars/all.yml new file mode 100644 index 000000000..921edb867 --- /dev/null +++ b/playbooks/openstack/sample-inventory/group_vars/all.yml @@ -0,0 +1,147 @@ +--- +openshift_openstack_clusterid: "openshift" +openshift_openstack_public_dns_domain: "example.com" +openshift_openstack_dns_nameservers: [] + +# # Used Hostnames +# # - set custom hostnames for roles by uncommenting corresponding lines +#openshift_openstack_master_hostname: "master" +#openshift_openstack_infra_hostname: "infra-node" +#openshift_openstack_node_hostname: "app-node" +#openshift_openstack_lb_hostname: "lb" +#openshift_openstack_etcd_hostname: "etcd" +#openshift_openstack_dns_hostname: "dns" + +openshift_openstack_keypair_name: "openshift" +openshift_openstack_external_network_name: "public" +#openshift_openstack_private_network_name:  "openshift-ansible-{{ openshift_openstack_stack_name }}-net" +# # A dedicated Neutron network name for containers data network +# # Configures the data network to be separated from openshift_openstack_private_network_name +# # NOTE: this is currently only supported with the Flannel SDN +#openstack_private_data_network_name: "openshift-ansible-{{ openshift_openstack_stack_name }}-data-net" + +## If you want to use a provider network, set its name here. +## NOTE: the `openshift_openstack_external_network_name` and +## `openshift_openstack_private_network_name` options will be ignored when using a +## provider network. +#openshift_openstack_provider_network_name: "provider" + +# # Used Images +# # - set specific images for roles by uncommenting corresponding lines +# # - note: do not remove openshift_openstack_default_image_name definition +#openshift_openstack_master_image_name: "centos7" +#openshift_openstack_infra_image_name: "centos7" +#openshift_openstack_node_image_name: "centos7" +#openshift_openstack_lb_image_name: "centos7" +#openshift_openstack_etcd_image_name: "centos7" +#openshift_openstack_dns_image_name: "centos7" +openshift_openstack_default_image_name: "centos7" + +openshift_openstack_num_masters: 1 +openshift_openstack_num_infra: 1 +openshift_openstack_num_nodes: 2 + +# # Used Flavors +# # - set specific flavors for roles by uncommenting corresponding lines +# # - note: do not remove openshift_openstack_default_flavor definition +#openshift_openstack_master_flavor: "m1.medium" +#openshift_openstack_infra_flavor: "m1.medium" +#openshift_openstack_node_flavor: "m1.medium" +#openshift_openstack_lb_flavor: "m1.medium" +#openshift_openstack_etcd_flavor: "m1.medium" +#openshift_openstack_dns_flavor: "m1.medium" +openshift_openstack_default_flavor: "m1.medium" + +# # Numerical index of nodes to remove +# openshift_openstack_nodes_to_remove: [] + +# # Docker volume size +# # - set specific volume size for roles by uncommenting corresponding lines +# # - note: do not remove docker_default_volume_size definition +#openshift_openstack_docker_master_volume_size: "15" +#openshift_openstack_docker_infra_volume_size: "15" +#openshift_openstack_docker_node_volume_size: "15" +#openshift_openstack_docker_etcd_volume_size: "2" +#openshift_openstack_docker_dns_volume_size: "1" +#openshift_openstack_docker_lb_volume_size: "5" +openshift_openstack_docker_volume_size: "15" + +## Specify server group policies for
master and infra nodes. Nova must be configured to +## enable these policies. 'anti-affinity' will ensure that each VM is launched on a +## different physical host. +#openshift_openstack_master_server_group_policies: [anti-affinity] +#openshift_openstack_infra_server_group_policies: [anti-affinity] + +## Create a Cinder volume and use it for the OpenShift registry. +## NOTE: the openstack credentials and hosted registry options must be set in OSEv3.yml! +#openshift_openstack_cinder_hosted_registry_name: cinder-registry +#openshift_openstack_cinder_hosted_registry_size_gb: 10 + +## Set up a filesystem on the cinder volume specified in `OSEv3.yml`. +## You need to specify the file system and volume ID in OSEv3 via +## `openshift_hosted_registry_storage_openstack_filesystem` and +## `openshift_hosted_registry_storage_openstack_volumeID`. +## WARNING: This will delete any data on the volume! +#openshift_openstack_prepare_and_format_registry_volume: False + +openshift_openstack_subnet_prefix: "192.168.99" + +## Red Hat subscription defaults to false which means we will not attempt to +## subscribe the nodes +#rhsm_register: False + +# # Using Red Hat Satellite: +#rhsm_register: True +#rhsm_satellite: 'sat-6.example.com' +#rhsm_org: 'OPENSHIFT_ORG' +#rhsm_activationkey: '<activation-key>' + +# # Or using RHN username, password and optionally pool: +#rhsm_register: True +#rhsm_username: '<username>' +#rhsm_password: '<password>' +#rhsm_pool: '<pool id>' + +#rhsm_repos: +# - "rhel-7-server-rpms" +# - "rhel-7-server-ose-3.5-rpms" +# - "rhel-7-server-extras-rpms" +# - "rhel-7-fast-datapath-rpms" + + +# # Roll-your-own DNS +#openshift_openstack_num_dns: 0 +#openshift_openstack_external_nsupdate_keys: +#  public: +#    key_secret: 'SKqKNdpfk7llKxZ57bbxUnUDobaaJp9t8CjXLJPl+fRI5mPcSBuxTAyvJPa6Y9R7vUg9DwCy/6WTpgLNqnV4Hg==' +#    key_algorithm: 'hmac-md5' +#    server: '192.168.1.1' +#  private: +#    key_secret: 'kVE2bVTgZjrdJipxPhID8BEZmbHD8cExlVPR+zbFpW6la8kL5wpXiwOh8q5AAosXQI5t95UXwq3Inx8QT58duw==' +#    key_algorithm: 'hmac-md5' +#    server: '192.168.1.2' + +# # Customize DNS server security options +#named_public_recursion: 'no' +#named_private_recursion: 'yes' + + +# NOTE(shadower): Do not change this value. The Ansible user is currently +# hardcoded to `openshift`. +ansible_user: openshift + +# # Use a single security group for a cluster (default: false) +#openshift_openstack_flat_secgrp: false + +# If you want to use the VM storage instead of Cinder volumes, set this to `true`. +# NOTE: this is for testing only! Your data will be gone once the VM disappears! +# openshift_openstack_ephemeral_volumes: false + +# # OpenShift node labels +# # - in order to customise node labels for app and/or infra group, set the +# #   openshift_openstack_cluster_node_labels variable +#openshift_openstack_cluster_node_labels: +#  app: +#    region: primary +#  infra: +#    region: infra diff --git a/playbooks/openstack/sample-inventory/inventory.py b/playbooks/openstack/sample-inventory/inventory.py new file mode 100755 index 000000000..47c56d94d --- /dev/null +++ b/playbooks/openstack/sample-inventory/inventory.py @@ -0,0 +1,96 @@ +#!/usr/bin/env python +""" +This is an Ansible dynamic inventory for OpenStack. + +It requires your OpenStack credentials to be set in clouds.yaml or your shell +environment.
+ +""" + +from __future__ import print_function + +import json + +import shade + + +def build_inventory(): +    '''Build the dynamic inventory.''' +    cloud = shade.openstack_cloud() + +    inventory = {} + +    # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER` +    # environment variable. +    cluster_hosts = [ +        server for server in cloud.list_servers() +        if 'metadata' in server and 'clusterid' in server.metadata] + +    masters = [server.name for server in cluster_hosts +               if server.metadata['host-type'] == 'master'] + +    etcd = [server.name for server in cluster_hosts +            if server.metadata['host-type'] == 'etcd'] +    if not etcd: +        etcd = masters + +    infra_hosts = [server.name for server in cluster_hosts +                   if server.metadata['host-type'] == 'node' and +                   server.metadata['sub-host-type'] == 'infra'] + +    app = [server.name for server in cluster_hosts +           if server.metadata['host-type'] == 'node' and +           server.metadata['sub-host-type'] == 'app'] + +    nodes = list(set(masters + infra_hosts + app)) + +    dns = [server.name for server in cluster_hosts +           if server.metadata['host-type'] == 'dns'] + +    load_balancers = [server.name for server in cluster_hosts +                      if server.metadata['host-type'] == 'lb'] + +    osev3 = list(set(nodes + etcd + load_balancers)) + +    inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]} +    inventory['OSEv3'] = {'hosts': osev3} +    inventory['masters'] = {'hosts': masters} +    inventory['etcd'] = {'hosts': etcd} +    inventory['nodes'] = {'hosts': nodes} +    inventory['infra_hosts'] = {'hosts': infra_hosts} +    inventory['app'] = {'hosts': app} +    inventory['dns'] = {'hosts': dns} +    inventory['lb'] = {'hosts': load_balancers} + +    for server in cluster_hosts: +        if 'group' in server.metadata: +            group = server.metadata.group +            if group not in inventory: +                inventory[group] = {'hosts': []} +            inventory[group]['hosts'].append(server.name) + +    inventory['_meta'] = {'hostvars': {}} + +    for server in cluster_hosts: +        ssh_ip_address = server.public_v4 or server.private_v4 +        hostvars = { +            'ansible_host': ssh_ip_address +        } + +        public_v4 = server.public_v4 or server.private_v4 +        if public_v4: +            hostvars['public_v4'] = public_v4 +        # TODO(shadower): what about multiple networks? 
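+        # Record the private address separately so playbooks can prefer it +        # for cluster-internal traffic.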
+        if server.private_v4: +            hostvars['private_v4'] = server.private_v4 + +        node_labels = server.metadata.get('node_labels') +        if node_labels: +            hostvars['openshift_node_labels'] = node_labels + +        inventory['_meta']['hostvars'][server.name] = hostvars +    return inventory + + +if __name__ == '__main__': +    print(json.dumps(build_inventory(), indent=4, sort_keys=True)) diff --git a/requirements.txt b/requirements.txt index 5bc29f193..be1bde18e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,4 +7,5 @@ pyOpenSSL==16.2.0  # We need to disable ruamel.yaml for now because of test failures  #ruamel.yaml  six==1.10.0 +shade==1.24.0  passlib==1.6.5 diff --git a/roles/openshift_openstack/defaults/main.yml b/roles/openshift_openstack/defaults/main.yml new file mode 100644 index 000000000..5f182e0d6 --- /dev/null +++ b/roles/openshift_openstack/defaults/main.yml @@ -0,0 +1,96 @@ +--- +openshift_openstack_stack_state: 'present' + +openshift_openstack_ssh_ingress_cidr: 0.0.0.0/0 +openshift_openstack_node_ingress_cidr: 0.0.0.0/0 +openshift_openstack_lb_ingress_cidr: 0.0.0.0/0 +openshift_openstack_bastion_ingress_cidr: 0.0.0.0/0 +openshift_openstack_num_etcd: 0 +openshift_openstack_num_masters: 1 +openshift_openstack_num_nodes: 1 +openshift_openstack_num_dns: 0 +openshift_openstack_num_infra: 1 +openshift_openstack_dns_nameservers: [] +openshift_openstack_nodes_to_remove: [] + + +openshift_openstack_cluster_node_labels: +  app: +    region: primary +  infra: +    region: infra + +openshift_openstack_install_debug_packages: false +openshift_openstack_required_packages: +  - docker +  - NetworkManager +  - wget +  - git +  - net-tools +  - bind-utils +  - bridge-utils +openshift_openstack_debug_packages: +  - bash-completion +  - vim-enhanced + +# container-storage-setup +openshift_openstack_container_storage_setup: +  docker_dev: "/dev/sdb" +  docker_vg: "docker-vol" +  docker_data_size: "95%VG" +  docker_dm_basesize: "3G" +  container_root_lv_name: "dockerlv" +  container_root_lv_mount_path: "/var/lib/docker" + + +# populate-dns +openshift_openstack_dns_records_add: [] +openshift_openstack_external_nsupdate_keys: {} + +openshift_openstack_full_dns_domain: "{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' 
+ openshift_openstack_public_dns_domain) }}" +openshift_openstack_app_subdomain: "apps" + + +# heat vars +openshift_openstack_clusterid: openshift +openshift_openstack_stack_name: "{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}" +openshift_openstack_subnet_prefix: "192.168.99" +openshift_openstack_master_hostname: master +openshift_openstack_infra_hostname: infra-node +openshift_openstack_node_hostname: app-node +openshift_openstack_lb_hostname: lb +openshift_openstack_etcd_hostname: etcd +openshift_openstack_dns_hostname: dns +openshift_openstack_keypair_name: openshift +openshift_openstack_lb_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_etcd_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_master_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_node_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_infra_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_dns_flavor: "{{ openshift_openstack_default_flavor }}" +openshift_openstack_master_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_infra_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_node_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_lb_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_etcd_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_dns_image: "{{ openshift_openstack_default_image_name }}" +openshift_openstack_provider_network_name: null +openshift_openstack_external_network_name: null +openshift_openstack_private_network: >- +  {% if openshift_openstack_provider_network_name | default(None) -%} +  {{ openshift_openstack_provider_network_name }} +  {%- else -%} +  {{ openshift_openstack_private_network_name | default('openshift-ansible-' + openshift_openstack_stack_name + '-net') }} +  {%- endif -%} +openshift_openstack_master_server_group_policies: [] +openshift_openstack_infra_server_group_policies: [] +openshift_openstack_docker_volume_size: 15 +openshift_openstack_master_volume_size: "{{ openshift_openstack_docker_volume_size }}" +openshift_openstack_infra_volume_size: "{{ openshift_openstack_docker_volume_size }}" +openshift_openstack_node_volume_size: "{{ openshift_openstack_docker_volume_size }}" +openshift_openstack_etcd_volume_size: 2 +openshift_openstack_dns_volume_size: 1 +openshift_openstack_lb_volume_size: 5 +openshift_openstack_use_bastion: false +openshift_openstack_ui_ssh_tunnel: false +openshift_openstack_ephemeral_volumes: false diff --git a/roles/openshift_openstack/tasks/check-prerequisites.yml b/roles/openshift_openstack/tasks/check-prerequisites.yml new file mode 100644 index 000000000..57c7238d1 --- /dev/null +++ b/roles/openshift_openstack/tasks/check-prerequisites.yml @@ -0,0 +1,105 @@ +--- +# Check Ansible +- name: Check Ansible version +  assert: +    that: > +      (ansible_version.major == 2 and ansible_version.minor >= 3) or +      (ansible_version.major > 2) +    msg: "Ansible version must be at least 2.3" + +# Check shade +- name: Try to import python module shade +  command: python -c "import shade" +  ignore_errors: yes +  register: shade_result +- name: Check if shade is installed +  assert: +    that: 'shade_result.rc == 0' +    msg: "Python module shade is not installed" + +# Check jmespath +- name: Try to import python module jmespath +  command: python -c "import jmespath" +  ignore_errors: yes +  register:
jmespath_result +- name: Check if jmespath is installed +  assert: +    that: 'jmespath_result.rc == 0' +    msg: "Python module jmespath is not installed" + +# Check python-dns +- name: Try to import python DNS module +  command: python -c "import dns" +  ignore_errors: yes +  register: pythondns_result +- name: Check if python-dns is installed +  assert: +    that: 'pythondns_result.rc == 0' +    msg: "Python module python-dns is not installed" + +# Check jinja2 +- name: Try to import jinja2 module +  command: python -c "import jinja2" +  ignore_errors: yes +  register: jinja_result +- name: Check if jinja2 is installed +  assert: +    that: 'jinja_result.rc == 0' +    msg: "Python module jinja2 is not installed" + +# Check Glance image +- name: Try to get image facts +  os_image_facts: +    image: "{{ openshift_openstack_default_image_name }}" +  register: image_result +- name: Check that image is available +  assert: +    that: "image_result.ansible_facts.openstack_image" +    msg: "Image {{ openshift_openstack_default_image_name }} is not available" + +# Check network name +- name: Try to get network facts +  os_networks_facts: +    name: "{{ openshift_openstack_external_network_name }}" +  register: network_result +  when: not openshift_openstack_provider_network_name|default(None) +- name: Check that network is available +  assert: +    that: "network_result.ansible_facts.openstack_networks" +    msg: "Network {{ openshift_openstack_external_network_name }} is not available" +  when: not openshift_openstack_provider_network_name|default(None) + +# Check keypair +# TODO kpilatov: there is no Ansible module for getting OS keypairs +#                (os_keypair is not suitable for this) +#                this method does not force python-openstackclient dependency +- name: Try to show keypair +  command: > +           python -c 'import shade; cloud = shade.openstack_cloud(); +           exit(cloud.get_keypair("{{ openshift_openstack_keypair_name }}") is None)' +  ignore_errors: yes +  register: key_result +- name: Check that keypair is available +  assert: +    that: 'key_result.rc == 0' +    msg: "Keypair {{ openshift_openstack_keypair_name }} is not available" + +# Check that custom images are available +- include: custom_image_check.yaml +  with_items: +  - "{{ openshift_openstack_master_image }}" +  - "{{ openshift_openstack_infra_image }}" +  - "{{ openshift_openstack_node_image }}" +  - "{{ openshift_openstack_lb_image }}" +  - "{{ openshift_openstack_etcd_image }}" +  - "{{ openshift_openstack_dns_image }}" + +# Check that custom flavors are available +- include: custom_flavor_check.yaml +  with_items: +  - "{{ openshift_openstack_master_flavor }}" +  - "{{ openshift_openstack_infra_flavor }}" +  - "{{ openshift_openstack_node_flavor }}" +  - "{{ openshift_openstack_lb_flavor }}" +  - "{{ openshift_openstack_etcd_flavor }}" +  - "{{ openshift_openstack_dns_flavor }}" diff --git a/roles/openshift_openstack/tasks/cleanup.yml b/roles/openshift_openstack/tasks/cleanup.yml new file mode 100644 index 000000000..258334a6b --- /dev/null +++ b/roles/openshift_openstack/tasks/cleanup.yml @@ -0,0 +1,6 @@ +--- + +- name: cleanup temp files +  file: +    path: "{{ stack_template_pre.path }}" +    state: absent diff --git a/roles/openshift_openstack/tasks/container-storage-setup.yml b/roles/openshift_openstack/tasks/container-storage-setup.yml new file mode 100644 index 000000000..82307b208 --- /dev/null +++ b/roles/openshift_openstack/tasks/container-storage-setup.yml @@ -0,0 +1,37 @@ +--- 
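+# Choose a docker-storage-setup configuration: overlayfs on RHEL 7.4 and +# newer, devicemapper on older RHEL, and devicemapper on CentOS until +# overlay2 support there is confirmed (see the TODO below).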
+- block: +    - name: create the docker-storage-setup config file +      template: +        src: docker-storage-setup-overlayfs.j2 +        dest: /etc/sysconfig/docker-storage-setup +        owner: root +        group: root +        mode: 0644 +  when: +    - ansible_distribution_version | version_compare('7.4', '>=') +    - ansible_distribution == "RedHat" + +- block: +    - name: create the docker-storage-setup config file +      template: +        src: docker-storage-setup-dm.j2 +        dest: /etc/sysconfig/docker-storage-setup +        owner: root +        group: root +        mode: 0644 +  when: +    - ansible_distribution_version | version_compare('7.4', '<') +    - ansible_distribution == "RedHat" + +- block: +    - name: create the docker-storage-setup config file for CentOS +      template: +        src: docker-storage-setup-dm.j2 +        dest: /etc/sysconfig/docker-storage-setup +        owner: root +        group: root +        mode: 0644 +  # TODO(shadower): Find out which CentOS version supports overlay2 +  when: +    - ansible_distribution == "CentOS" diff --git a/roles/openshift_openstack/tasks/custom_flavor_check.yaml b/roles/openshift_openstack/tasks/custom_flavor_check.yaml new file mode 100644 index 000000000..5fb7a76ff --- /dev/null +++ b/roles/openshift_openstack/tasks/custom_flavor_check.yaml @@ -0,0 +1,10 @@ +--- +- name: Try to get flavor facts +  os_flavor_facts: +    name: "{{ item }}" +  register: flavor_result + +- name: Check that custom flavor is available +  assert: +    that: "flavor_result.ansible_facts.openstack_flavors" +    msg: "Flavor {{ item }} is not available." diff --git a/roles/openshift_openstack/tasks/custom_image_check.yaml b/roles/openshift_openstack/tasks/custom_image_check.yaml new file mode 100644 index 000000000..4ae163406 --- /dev/null +++ b/roles/openshift_openstack/tasks/custom_image_check.yaml @@ -0,0 +1,10 @@ +--- +- name: Try to get image facts +  os_image_facts: +    image: "{{ item }}" +  register: image_result + +- name: Check that custom image is available +  assert: +    that: "image_result.ansible_facts.openstack_image" +    msg: "Image {{ item }} is not available."
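The prerequisite checks above all follow the same pattern: look the resource up, then assert that the lookup returned something. For a quick pre-flight test outside Ansible, a minimal standalone sketch of the same checks might look like the following; the resource names are examples matching the sample inventory, and shade is the library already pinned in requirements.txt:

#!/usr/bin/env python
"""Standalone pre-flight sketch mirroring check-prerequisites.yml.

Assumes OpenStack credentials in clouds.yaml or the shell environment;
the names below are examples taken from the sample inventory.
"""
from __future__ import print_function

import sys

import shade

CHECKS = [
    ('image', 'get_image', 'centos7'),
    ('flavor', 'get_flavor', 'm1.medium'),
    ('keypair', 'get_keypair', 'openshift'),
    ('network', 'get_network', 'public'),
]


def main():
    cloud = shade.openstack_cloud()
    missing = []
    for kind, getter, name in CHECKS:
        # shade's get_* helpers return None when nothing matches
        if getattr(cloud, getter)(name) is None:
            missing.append('%s "%s"' % (kind, name))
    for entry in missing:
        print('Not available: %s' % entry)
    return 1 if missing else 0


if __name__ == '__main__':
    sys.exit(main())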
diff --git a/roles/openshift_openstack/tasks/generate-templates.yml b/roles/openshift_openstack/tasks/generate-templates.yml new file mode 100644 index 000000000..3a8b588e9 --- /dev/null +++ b/roles/openshift_openstack/tasks/generate-templates.yml @@ -0,0 +1,29 @@ +--- +- name: create HOT stack template prefix +  register: stack_template_pre +  tempfile: +    state: directory +    prefix: openshift-ansible + +- name: set template paths +  set_fact: +    stack_template_path: "{{ stack_template_pre.path }}/stack.yaml" +    user_data_template_path: "{{ stack_template_pre.path }}/user-data" + +- name: Print out the Heat template directory +  debug: var=stack_template_pre + +- name: generate HOT stack template from jinja2 template +  template: +    src: heat_stack.yaml.j2 +    dest: "{{ stack_template_path }}" + +- name: generate HOT server template from jinja2 template +  template: +    src: heat_stack_server.yaml.j2 +    dest: "{{ stack_template_pre.path }}/server.yaml" + +- name: generate user_data from jinja2 template +  template: +    src: user_data.j2 +    dest: "{{ user_data_template_path }}" diff --git a/roles/openshift_openstack/tasks/hostname.yml b/roles/openshift_openstack/tasks/hostname.yml new file mode 100644 index 000000000..e1a18425f --- /dev/null +++ b/roles/openshift_openstack/tasks/hostname.yml @@ -0,0 +1,26 @@ +--- +- name: Setting Hostname Fact +  set_fact: +    new_hostname: "{{ custom_hostname | default(inventory_hostname_short) }}" + +- name: Setting FQDN Fact +  set_fact: +    new_fqdn: "{{ new_hostname }}.{{ openshift_openstack_full_dns_domain }}" + +- name: Setting hostname and DNS domain +  hostname: name="{{ new_fqdn }}" + +- name: Check for cloud.cfg +  stat: path=/etc/cloud/cloud.cfg +  register: cloud_cfg + +- name: Prevent cloud-init updates of hostname/fqdn (if applicable) +  lineinfile: +    dest: /etc/cloud/cloud.cfg +    state: present +    regexp: "{{ item.regexp }}" +    line: "{{ item.line }}" +  with_items: +  - { regexp: '^ - set_hostname', line: '# - set_hostname' } +  - { regexp: '^ - update_hostname', line: '# - update_hostname' } +  when: cloud_cfg.stat.exists diff --git a/roles/openshift_openstack/tasks/net_vars_check.yaml b/roles/openshift_openstack/tasks/net_vars_check.yaml new file mode 100644 index 000000000..18b9b21b9 --- /dev/null +++ b/roles/openshift_openstack/tasks/net_vars_check.yaml @@ -0,0 +1,14 @@ +--- +- name: Check the provider network configuration +  fail: +    msg: "Flannel SDN requires a dedicated containers data network and cannot work over a provider network" +  when: +    - openshift_openstack_provider_network_name is defined +    - openstack_private_data_network_name is defined + +- name: Check the flannel network configuration +  fail: +    msg: "A dedicated containers data network is only supported with Flannel SDN" +  when: +    - openstack_private_data_network_name is defined +    - not openshift_use_flannel|default(False)|bool diff --git a/roles/openshift_openstack/tasks/node-configuration.yml b/roles/openshift_openstack/tasks/node-configuration.yml new file mode 100644 index 000000000..89e58d830 --- /dev/null +++ b/roles/openshift_openstack/tasks/node-configuration.yml @@ -0,0 +1,11 @@ +--- +- name: "Verify SELinux is enforcing" +  fail: +    msg: "SELinux is required for OpenShift and has been detected as '{{ ansible_selinux.config_mode }}'" +  when: ansible_selinux.config_mode != "enforcing" + +- include: hostname.yml + +- include: container-storage-setup.yml + +- include: node-network.yml diff --git
a/roles/openshift_openstack/tasks/node-network.yml b/roles/openshift_openstack/tasks/node-network.yml new file mode 100644 index 000000000..f494e5158 --- /dev/null +++ b/roles/openshift_openstack/tasks/node-network.yml @@ -0,0 +1,19 @@ +--- +- name: configure NetworkManager +  lineinfile: +    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4['interface'] }}" +    regexp: '^{{ item }}=' +    line: '{{ item }}=yes' +    state: present +    create: yes +  with_items: +  - 'USE_PEERDNS' +  - 'NM_CONTROLLED' + +- name: enable and start NetworkManager +  service: +    name: NetworkManager +    state: restarted +    enabled: yes + +# TODO(shadower): add the flannel interface tasks from post-provision-openstack.yml diff --git a/roles/openshift_openstack/tasks/node-packages.yml b/roles/openshift_openstack/tasks/node-packages.yml new file mode 100644 index 000000000..7864f5269 --- /dev/null +++ b/roles/openshift_openstack/tasks/node-packages.yml @@ -0,0 +1,15 @@ +--- +# TODO: subscribe to RHEL and install docker and other packages here + +- name: Install required packages +  yum: +    name: "{{ item }}" +    state: latest +  with_items: "{{ openshift_openstack_required_packages }}" + +- name: Install debug packages (optional) +  yum: +    name: "{{ item }}" +    state: latest +  with_items: "{{ openshift_openstack_debug_packages }}" +  when: openshift_openstack_install_debug_packages|bool diff --git a/roles/openshift_openstack/tasks/populate-dns.yml b/roles/openshift_openstack/tasks/populate-dns.yml new file mode 100644 index 000000000..c03aceb94 --- /dev/null +++ b/roles/openshift_openstack/tasks/populate-dns.yml @@ -0,0 +1,128 @@ +--- +- name: "Generate list of private A records" +  set_fact: +    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['private_v4'] } ] }}" +  with_items: "{{ groups['cluster_hosts'] }}" + +- name: "Add wildcard records to the private A records for infrahosts" +  set_fact: +    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': '*.' 
+ openshift_openstack_app_subdomain, 'ip': hostvars[item]['private_v4'] } ] }}" +  with_items: "{{ groups['infra_hosts'] }}" + +- name: "Add public master cluster hostname records to the private A records (single master)" +  set_fact: +    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(openshift_openstack_full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].private_v4 } ] }}" +  when: +    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined +    - openshift_openstack_num_masters == 1 + +- name: "Add public master cluster hostname records to the private A records (multi-master)" +  set_fact: +    private_records: "{{ private_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(openshift_openstack_full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].private_v4 } ] }}" +  when: +    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined +    - openshift_openstack_num_masters > 1 + +- name: "Set the private DNS server to use the external value (if provided)" +  set_fact: +    nsupdate_server_private: "{{ openshift_openstack_external_nsupdate_keys['private']['server'] }}" +    nsupdate_key_secret_private: "{{ openshift_openstack_external_nsupdate_keys['private']['key_secret'] }}" +    nsupdate_key_algorithm_private: "{{ openshift_openstack_external_nsupdate_keys['private']['key_algorithm'] }}" +    nsupdate_private_key_name: "{{ openshift_openstack_external_nsupdate_keys['private']['key_name']|default('private-' + openshift_openstack_full_dns_domain) }}" +  when: +    - openshift_openstack_external_nsupdate_keys is defined +    - openshift_openstack_external_nsupdate_keys['private'] is defined + + +- name: "Generate the private Add section for DNS" +  set_fact: +    private_named_records: +      - view: "private" +        zone: "{{ openshift_openstack_full_dns_domain }}" +        server: "{{ nsupdate_server_private }}" +        key_name: "{{ nsupdate_private_key_name|default('private-' + openshift_openstack_full_dns_domain) }}" +        key_secret: "{{ nsupdate_key_secret_private }}" +        key_algorithm: "{{ nsupdate_key_algorithm_private | lower }}" +        entries: "{{ private_records }}" + +- name: "Generate list of public A records" +  set_fact: +    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': hostvars[item]['ansible_hostname'], 'ip': hostvars[item]['public_v4'] } ] }}" +  with_items: "{{ groups['cluster_hosts'] }}" +  when: hostvars[item]['public_v4'] is defined + +- name: "Add wildcard records to the public A records" +  set_fact: +    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': '*.' 
+ openshift_openstack_app_subdomain, 'ip': hostvars[item]['public_v4'] } ] }}" +  with_items: "{{ groups['infra_hosts'] }}" +  when: hostvars[item]['public_v4'] is defined + +- name: "Add public master cluster hostname records to the public A records (single master)" +  set_fact: +    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(openshift_openstack_full_dns_domain, ''))[:-1], 'ip': hostvars[groups.masters[0]].public_v4 } ] }}" +  when: +    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined +    - openshift_openstack_num_masters == 1 +    - not openshift_openstack_use_bastion|bool + +- name: "Add public master cluster hostname records to the public A records (single master behind a bastion)" +  set_fact: +    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(openshift_openstack_full_dns_domain, ''))[:-1], 'ip': hostvars[groups.bastions[0]].public_v4 } ] }}" +  when: +    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined +    - openshift_openstack_num_masters == 1 +    - openshift_openstack_use_bastion|bool + +- name: "Add public master cluster hostname records to the public A records (multi-master)" +  set_fact: +    public_records: "{{ public_records | default([]) + [ { 'type': 'A', 'hostname': (hostvars[groups.masters[0]].openshift_master_cluster_public_hostname | replace(openshift_openstack_full_dns_domain, ''))[:-1], 'ip': hostvars[groups.lb[0]].public_v4 } ] }}" +  when: +    - hostvars[groups.masters[0]].openshift_master_cluster_public_hostname is defined +    - openshift_openstack_num_masters > 1 + +- name: "Set the public DNS server details to use the external value (if provided)" +  set_fact: +    nsupdate_server_public: "{{ openshift_openstack_external_nsupdate_keys['public']['server'] }}" +    nsupdate_key_secret_public: "{{ openshift_openstack_external_nsupdate_keys['public']['key_secret'] }}" +    nsupdate_key_algorithm_public: "{{ openshift_openstack_external_nsupdate_keys['public']['key_algorithm'] }}" +    nsupdate_public_key_name: "{{ openshift_openstack_external_nsupdate_keys['public']['key_name']|default('public-' + openshift_openstack_full_dns_domain) }}" +  when: +    - openshift_openstack_external_nsupdate_keys is defined +    - openshift_openstack_external_nsupdate_keys['public'] is defined + +- name: "Generate the public Add section for DNS" +  set_fact: +    public_named_records: +      - view: "public" +        zone: "{{ openshift_openstack_full_dns_domain }}" +        server: "{{ nsupdate_server_public }}" +        key_name: "{{ nsupdate_public_key_name|default('public-' + openshift_openstack_full_dns_domain) }}" +        key_secret: "{{ nsupdate_key_secret_public }}" +        key_algorithm: "{{ nsupdate_key_algorithm_public | lower }}" +        entries: "{{ public_records }}" + + +- name: "Generate the final openshift_openstack_dns_records_add" +  set_fact: +    openshift_openstack_dns_records_add: "{{ private_named_records + public_named_records }}" + + +- name: "Add DNS A records" +  nsupdate: +    key_name: "{{ item.0.key_name }}" +    key_secret: "{{ item.0.key_secret }}" +    key_algorithm: "{{ item.0.key_algorithm }}" +    server: "{{ item.0.server }}" +    zone: "{{ item.0.zone }}" +    record: "{{ item.1.hostname }}" +    value: "{{ item.1.ip }}" +    type: "{{ 
item.1.type }}" +    # TODO(shadower): add a cleanup playbook that removes these records, too! +    state: present +  with_subelements: +    - "{{ openshift_openstack_dns_records_add | default({}) }}" +    - entries +  register: nsupdate_add_result +  until: nsupdate_add_result|succeeded +  retries: 10 +  delay: 1 diff --git a/roles/openshift_openstack/tasks/prepare-and-format-cinder-volume.yaml b/roles/openshift_openstack/tasks/prepare-and-format-cinder-volume.yaml new file mode 100644 index 000000000..fc51f6dc2 --- /dev/null +++ b/roles/openshift_openstack/tasks/prepare-and-format-cinder-volume.yaml @@ -0,0 +1,59 @@ +--- +- name: Attach the volume to the VM +  os_server_volume: +    state: present +    server: "{{ groups['masters'][0] }}" +    volume: "{{ cinder_volume }}" +  register: volume_attachment + +- set_fact: +    attached_device: >- +      {{ volume_attachment['attachments']|json_query("[?volume_id=='" + cinder_volume + "'].device | [0]") }} + +- delegate_to: "{{ groups['masters'][0] }}" +  block: +  - name: Wait for the device to appear +    wait_for: path={{ attached_device }} + +  - name: Create a temp directory for mounting the volume +    tempfile: +      prefix: cinder-volume +      state: directory +    register: cinder_mount_dir + +  - name: Format the device +    filesystem: +      fstype: "{{ cinder_fs }}" +      dev: "{{ attached_device }}" + +  - name: Mount the device +    mount: +      name: "{{ cinder_mount_dir.path }}" +      src: "{{ attached_device }}" +      state: mounted +      fstype: "{{ cinder_fs }}" + +  - name: Change mode on the filesystem +    file: +      path: "{{ cinder_mount_dir.path }}" +      state: directory +      recurse: true +      mode: 0777 + +  - name: Unmount the device +    mount: +      name: "{{ cinder_mount_dir.path }}" +      src: "{{ attached_device }}" +      state: absent +      fstype: "{{ cinder_fs }}" + +  - name: Delete the temp directory +    file: +      name: "{{ cinder_mount_dir.path }}" +      state: absent + +- name: Detach the volume from the VM +  os_server_volume: +    state: absent +    server: "{{ groups['masters'][0] }}" +    volume: "{{ cinder_volume }}" diff --git a/roles/openshift_openstack/tasks/provision.yml b/roles/openshift_openstack/tasks/provision.yml new file mode 100644 index 000000000..dccbe334c --- /dev/null +++ b/roles/openshift_openstack/tasks/provision.yml @@ -0,0 +1,25 @@ +--- +- name: Generate the templates +  include: generate-templates.yml +  when: +  - openshift_openstack_stack_state == 'present' + +- name: Handle the Stack (create/delete) +  ignore_errors: False +  register: stack_create +  os_stack: +    name: "{{ openshift_openstack_stack_name }}" +    state: "{{ openshift_openstack_stack_state }}" +    template: "{{ stack_template_path | default(omit) }}" +    wait: yes + +- name: Add the new nodes to the inventory +  meta: refresh_inventory + +- name: CleanUp +  include: cleanup.yml +  when: +  - openshift_openstack_stack_state == 'present' + +# TODO(shadower): create the registry and PV Cinder volumes if specified +# and include the `prepare-and-format-cinder-volume` tasks to set it up diff --git a/roles/openshift_openstack/templates/docker-storage-setup-dm.j2 b/roles/openshift_openstack/templates/docker-storage-setup-dm.j2 new file mode 100644 index 000000000..32c6b5838 --- /dev/null +++ b/roles/openshift_openstack/templates/docker-storage-setup-dm.j2 @@ -0,0 +1,4 @@ +DEVS="{{ openshift_openstack_container_storage_setup.docker_dev }}" +VG="{{ 
openshift_openstack_container_storage_setup.docker_vg }}" +DATA_SIZE="{{ openshift_openstack_container_storage_setup.docker_data_size }}" +EXTRA_DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize={{ openshift_openstack_container_storage_setup.docker_dm_basesize }}" diff --git a/roles/openshift_openstack/templates/docker-storage-setup-overlayfs.j2 b/roles/openshift_openstack/templates/docker-storage-setup-overlayfs.j2 new file mode 100644 index 000000000..1bf366bdc --- /dev/null +++ b/roles/openshift_openstack/templates/docker-storage-setup-overlayfs.j2 @@ -0,0 +1,7 @@ +DEVS="{{ openshift_openstack_container_storage_setup.docker_dev }}" +VG="{{ openshift_openstack_container_storage_setup.docker_vg }}" +DATA_SIZE="{{ openshift_openstack_container_storage_setup.docker_data_size }}" +STORAGE_DRIVER=overlay2 +CONTAINER_ROOT_LV_NAME="{{ openshift_openstack_container_storage_setup.container_root_lv_name }}" +CONTAINER_ROOT_LV_MOUNT_PATH="{{ openshift_openstack_container_storage_setup.container_root_lv_mount_path }}" +CONTAINER_ROOT_LV_SIZE=100%FREE diff --git a/roles/openshift_openstack/templates/heat_stack.yaml.j2 b/roles/openshift_openstack/templates/heat_stack.yaml.j2 new file mode 100644 index 000000000..bfa65b460 --- /dev/null +++ b/roles/openshift_openstack/templates/heat_stack.yaml.j2 @@ -0,0 +1,888 @@ +heat_template_version: 2016-10-14 + +description: OpenShift cluster + +parameters: + +outputs: + +  etcd_names: +    description: Name of the etcds +    value: { get_attr: [ etcd, name ] } + +  etcd_ips: +    description: IPs of the etcds +    value: { get_attr: [ etcd, private_ip ] } + +  etcd_floating_ips: +    description: Floating IPs of the etcds +    value: { get_attr: [ etcd, floating_ip ] } + +  master_names: +    description: Name of the masters +    value: { get_attr: [ masters, name ] } + +  master_ips: +    description: IPs of the masters +    value: { get_attr: [ masters, private_ip ] } + +  master_floating_ips: +    description: Floating IPs of the masters +    value: { get_attr: [ masters, floating_ip ] } + +  node_names: +    description: Name of the nodes +    value: { get_attr: [ compute_nodes, name ] } + +  node_ips: +    description: IPs of the nodes +    value: { get_attr: [ compute_nodes, private_ip ] } + +  node_floating_ips: +    description: Floating IPs of the nodes +    value: { get_attr: [ compute_nodes, floating_ip ] } + +  infra_names: +    description: Name of the infra nodes +    value: { get_attr: [ infra_nodes, name ] } + +  infra_ips: +    description: IPs of the infra nodes +    value: { get_attr: [ infra_nodes, private_ip ] } + +  infra_floating_ips: +    description: Floating IPs of the infra nodes +    value: { get_attr: [ infra_nodes, floating_ip ] } + +{% if openshift_openstack_num_dns|int > 0 %} +  dns_name: +    description: Name of the DNS +    value: +      get_attr: +        - dns +        - name + +  dns_floating_ips: +    description: Floating IPs of the DNS +    value: { get_attr: [ dns, floating_ip ] } + +  dns_private_ips: +    description: Private IPs of the DNS +    value: { get_attr: [ dns, private_ip ] } +{% endif %} + +conditions: +  no_floating: {% if openshift_openstack_provider_network_name or openshift_openstack_use_bastion|bool %}true{% else %}false{% endif %} + +resources: + +{% if not openshift_openstack_provider_network_name %} +  net: +    type: OS::Neutron::Net +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-net +          params: +            cluster_id: {{ openshift_openstack_stack_name }}
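+ +  # The cluster subnet below is a /24 built from openshift_openstack_subnet_prefix; +  # instance ports are allocated from .3 through .254, with .1 left for the +  # router gateway (the Neutron default).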
+ +  subnet: +    type: OS::Neutron::Subnet +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-subnet +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      network: { get_resource: net } +      cidr: +        str_replace: +          template: subnet_24_prefix.0/24 +          params: +            subnet_24_prefix: {{ openshift_openstack_subnet_prefix }} +      allocation_pools: +        - start: +            str_replace: +              template: subnet_24_prefix.3 +              params: +                subnet_24_prefix: {{ openshift_openstack_subnet_prefix }} +          end: +            str_replace: +              template: subnet_24_prefix.254 +              params: +                subnet_24_prefix: {{ openshift_openstack_subnet_prefix }} +      dns_nameservers: +{% for nameserver in openshift_openstack_dns_nameservers %} +        - {{ nameserver }} +{% endfor %} + +{% if openshift_use_flannel|default(False)|bool %} +  data_net: +    type: OS::Neutron::Net +    properties: +      name: openshift-ansible-{{ openshift_openstack_stack_name }}-data-net +      port_security_enabled: false + +  data_subnet: +    type: OS::Neutron::Subnet +    properties: +      name: openshift-ansible-{{ openshift_openstack_stack_name }}-data-subnet +      network: { get_resource: data_net } +      cidr: {{ osm_cluster_network_cidr|default('10.128.0.0/14') }} +      gateway_ip: null +{% endif %} + +  router: +    type: OS::Neutron::Router +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-router +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      external_gateway_info: +        network: {{ openshift_openstack_external_network_name }} + +  interface: +    type: OS::Neutron::RouterInterface +    properties: +      router_id: { get_resource: router } +      subnet_id: { get_resource: subnet } + +{% endif %} + +#  keypair: +#    type: OS::Nova::KeyPair +#    properties: +#      name: +#        str_replace: +#          template: openshift-ansible-cluster_id-keypair +#          params: +#            cluster_id: {{ openshift_openstack_stack_name }} +#      public_key: {{ openshift_openstack_keypair_name }} + +  common-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-common-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Basic ssh/icmp security group for cluster_id OpenShift cluster +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: tcp +          port_range_min: 22 +          port_range_max: 22 +          remote_ip_prefix: {{ openshift_openstack_ssh_ingress_cidr }} +{% if openshift_openstack_use_bastion|bool %} +        - direction: ingress +          protocol: tcp +          port_range_min: 22 +          port_range_max: 22 +          remote_ip_prefix: {{ openshift_openstack_bastion_ingress_cidr }} +{% endif %} +        - direction: ingress +          protocol: icmp +          remote_ip_prefix: {{ openshift_openstack_ssh_ingress_cidr }} + +{% if openshift_openstack_flat_secgrp|default(False)|bool %} +  flat-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: 
openshift-ansible-cluster_id-flat-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id OpenShift cluster +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: tcp +          port_range_min: 4001 +          port_range_max: 4001 +        - direction: ingress +          protocol: tcp +          port_range_min: {{ openshift_master_api_port|default(8443) }} +          port_range_max: {{ openshift_master_api_port|default(8443) }} +        - direction: ingress +          protocol: tcp +          port_range_min: {{ openshift_master_console_port|default(8443) }} +          port_range_max: {{ openshift_master_console_port|default(8443) }} +        - direction: ingress +          protocol: tcp +          port_range_min: 8053 +          port_range_max: 8053 +        - direction: ingress +          protocol: udp +          port_range_min: 8053 +          port_range_max: 8053 +        - direction: ingress +          protocol: tcp +          port_range_min: 24224 +          port_range_max: 24224 +        - direction: ingress +          protocol: udp +          port_range_min: 24224 +          port_range_max: 24224 +        - direction: ingress +          protocol: tcp +          port_range_min: 2224 +          port_range_max: 2224 +        - direction: ingress +          protocol: udp +          port_range_min: 5404 +          port_range_max: 5405 +        - direction: ingress +          protocol: tcp +          port_range_min: 9090 +          port_range_max: 9090 +        - direction: ingress +          protocol: tcp +          port_range_min: 2379 +          port_range_max: 2380 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: tcp +          port_range_min: 10250 +          port_range_max: 10250 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: udp +          port_range_min: 10250 +          port_range_max: 10250 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: tcp +          port_range_min: 10255 +          port_range_max: 10255 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: udp +          port_range_min: 10255 +          port_range_max: 10255 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: udp +          port_range_min: 4789 +          port_range_max: 4789 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: tcp +          port_range_min: 30000 +          port_range_max: 32767 +          remote_ip_prefix: {{ openshift_openstack_node_ingress_cidr }} +        - direction: ingress +          protocol: tcp +          port_range_min: 30000 +          port_range_max: 32767 +          remote_ip_prefix: "{{ openshift_openstack_subnet_prefix }}.0/24" +{% else %} +  master-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-master-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id OpenShift cluster master +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress 
+          protocol: tcp +          port_range_min: 4001 +          port_range_max: 4001 +        - direction: ingress +          protocol: tcp +          port_range_min: {{ openshift_master_api_port|default(8443) }} +          port_range_max: {{ openshift_master_api_port|default(8443) }} +        - direction: ingress +          protocol: tcp +          port_range_min: {{ openshift_master_console_port|default(8443) }} +          port_range_max: {{ openshift_master_console_port|default(8443) }} +        - direction: ingress +          protocol: tcp +          port_range_min: 8053 +          port_range_max: 8053 +        - direction: ingress +          protocol: udp +          port_range_min: 8053 +          port_range_max: 8053 +        - direction: ingress +          protocol: tcp +          port_range_min: 24224 +          port_range_max: 24224 +        - direction: ingress +          protocol: udp +          port_range_min: 24224 +          port_range_max: 24224 +        - direction: ingress +          protocol: tcp +          port_range_min: 2224 +          port_range_max: 2224 +        - direction: ingress +          protocol: udp +          port_range_min: 5404 +          port_range_max: 5405 +        - direction: ingress +          protocol: tcp +          port_range_min: 9090 +          port_range_max: 9090 +{% if openshift_use_flannel|default(False)|bool %} +        - direction: ingress +          protocol: tcp +          port_range_min: 2379 +          port_range_max: 2379 +{% endif %} + +  etcd-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-etcd-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id etcd cluster +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: tcp +          port_range_min: 2379 +          port_range_max: 2379 +          remote_mode: remote_group_id +          remote_group_id: { get_resource: master-secgrp } +        - direction: ingress +          protocol: tcp +          port_range_min: 2380 +          port_range_max: 2380 +          remote_mode: remote_group_id + +  node-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-node-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id OpenShift cluster nodes +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: tcp +          port_range_min: 10250 +          port_range_max: 10250 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: tcp +          port_range_min: 10255 +          port_range_max: 10255 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: udp +          port_range_min: 10255 +          port_range_max: 10255 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: udp +          port_range_min: 4789 +          port_range_max: 4789 +          remote_mode: remote_group_id +        - direction: ingress +          protocol: tcp +          port_range_min: 30000 +          
port_range_max: 32767 +          remote_ip_prefix: {{ openshift_openstack_node_ingress_cidr }} +        - direction: ingress +          protocol: tcp +          port_range_min: 30000 +          port_range_max: 32767 +          remote_ip_prefix: "{{ openshift_openstack_subnet_prefix }}.0/24" +{% endif %} + +  infra-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-infra-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id OpenShift infrastructure cluster nodes +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: tcp +          port_range_min: 80 +          port_range_max: 80 +        - direction: ingress +          protocol: tcp +          port_range_min: 443 +          port_range_max: 443 + +{% if openshift_openstack_num_dns|int > 0 %} +  dns-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: +        str_replace: +          template: openshift-ansible-cluster_id-dns-secgrp +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      description: +        str_replace: +          template: Security group for cluster_id cluster DNS +          params: +            cluster_id: {{ openshift_openstack_stack_name }} +      rules: +        - direction: ingress +          protocol: udp +          port_range_min: 53 +          port_range_max: 53 +          remote_ip_prefix: {{ openshift_openstack_node_ingress_cidr }} +        - direction: ingress +          protocol: udp +          port_range_min: 53 +          port_range_max: 53 +          remote_ip_prefix: "{{ openshift_openstack_subnet_prefix }}.0/24" +        - direction: ingress +          protocol: tcp +          port_range_min: 53 +          port_range_max: 53 +          remote_ip_prefix: {{ openshift_openstack_node_ingress_cidr }} +        - direction: ingress +          protocol: tcp +          port_range_min: 53 +          port_range_max: 53 +          remote_ip_prefix: "{{ openshift_openstack_subnet_prefix }}.0/24" +{% endif %} + +{% if openshift_openstack_num_masters|int > 1 or openshift_openstack_ui_ssh_tunnel|bool %} +  lb-secgrp: +    type: OS::Neutron::SecurityGroup +    properties: +      name: openshift-ansible-{{ openshift_openstack_stack_name }}-lb-secgrp +      description: Security group for {{ openshift_openstack_stack_name }} cluster Load Balancer +      rules: +      - direction: ingress +        protocol: tcp +        port_range_min: {{ openshift_master_api_port | default(8443) }} +        port_range_max: {{ openshift_master_api_port | default(8443) }} +        remote_ip_prefix: {{ openshift_openstack_lb_ingress_cidr | default(openshift_openstack_bastion_ingress_cidr) }} +{% if openshift_openstack_ui_ssh_tunnel|bool %} +      - direction: ingress +        protocol: tcp +        port_range_min: {{ openshift_master_api_port | default(8443) }} +        port_range_max: {{ openshift_master_api_port | default(8443) }} +        remote_ip_prefix: {{ openshift_openstack_ssh_ingress_cidr }} +{% endif %} +{% if openshift_master_console_port is defined and openshift_master_console_port != openshift_master_api_port %} +      - direction: ingress +        protocol: tcp +        port_range_min: {{ openshift_master_console_port | default(8443) }} +        port_range_max: {{ 
openshift_master_console_port | default(8443) }} +        remote_ip_prefix: {{ openshift_openstack_lb_ingress_cidr | default(openshift_openstack_bastion_ingress_cidr) }} +{% endif %} +{% endif %} + +  etcd: +    type: OS::Heat::ResourceGroup +    properties: +      count: {{ openshift_openstack_num_etcd }} +      resource_def: +        type: server.yaml +        properties: +          name: +            str_replace: +              template: k8s_type-%index%.cluster_id +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +                k8s_type: {{ openshift_openstack_etcd_hostname }} +          cluster_env: {{ openshift_openstack_public_dns_domain }} +          cluster_id:  {{ openshift_openstack_stack_name }} +          group: +            str_replace: +              template: k8s_type.cluster_id +              params: +                k8s_type: etcds +                cluster_id: {{ openshift_openstack_stack_name }} +          type:        etcd +          image:       {{ openshift_openstack_etcd_image }} +          flavor:      {{ openshift_openstack_etcd_flavor }} +          key_name:    {{ openshift_openstack_keypair_name }} +{% if openshift_openstack_provider_network_name %} +          net:         {{ openshift_openstack_provider_network_name }} +          net_name:         {{ openshift_openstack_provider_network_name }} +{% else %} +          net:         { get_resource: net } +          subnet:      { get_resource: subnet } +          net_name: +            str_replace: +              template: openshift-ansible-cluster_id-net +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +{% endif %} +          secgrp: +            - { get_resource: {% if openshift_openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}etcd-secgrp{% endif %} } +            - { get_resource: common-secgrp } +          floating_network: +            if: +              - no_floating +              - null +              - {{ openshift_openstack_external_network_name }} +{% if openshift_openstack_use_bastion|bool or openshift_openstack_provider_network_name %} +          attach_float_net: false +{% endif %} +          volume_size: {{ openshift_openstack_etcd_volume_size }} +{% if not openshift_openstack_provider_network_name %} +    depends_on: +      - interface +{% endif %} + +{% if openshift_openstack_master_server_group_policies|length > 0 %} +  master_server_group: +    type: OS::Nova::ServerGroup +    properties: +      name: master_server_group +      policies: {{ openshift_openstack_master_server_group_policies }} +{% endif %} +{% if openshift_openstack_infra_server_group_policies|length > 0 %} +  infra_server_group: +    type: OS::Nova::ServerGroup +    properties: +      name: infra_server_group +      policies: {{ openshift_openstack_infra_server_group_policies }} +{% endif %} +{% if openshift_openstack_num_masters|int > 1 %} +  loadbalancer: +    type: OS::Heat::ResourceGroup +    properties: +      count: 1 +      resource_def: +        type: server.yaml +        properties: +          name: +            str_replace: +              template: k8s_type-%index%.cluster_id +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +                k8s_type: {{ openshift_openstack_lb_hostname }} +          cluster_env: {{ openshift_openstack_public_dns_domain }} +          cluster_id:  {{ openshift_openstack_stack_name }} +          group: +            str_replace: +              template: 
k8s_type.cluster_id +              params: +                k8s_type: lb +                cluster_id: {{ openshift_openstack_stack_name }} +          type:        lb +          image:       {{ openshift_openstack_lb_image }} +          flavor:      {{ openshift_openstack_lb_flavor }} +          key_name:    {{ openshift_openstack_keypair_name }} +{% if openshift_openstack_provider_network_name %} +          net:         {{ openshift_openstack_provider_network_name }} +          net_name:         {{ openshift_openstack_provider_network_name }} +{% else %} +          net:         { get_resource: net } +          subnet:      { get_resource: subnet } +          net_name: +            str_replace: +              template: openshift-ansible-cluster_id-net +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +{% endif %} +          secgrp: +            - { get_resource: lb-secgrp } +            - { get_resource: common-secgrp } +{% if not openshift_openstack_provider_network_name %} +          floating_network: {{ openshift_openstack_external_network_name }} +{% endif %} +          volume_size: {{ openshift_openstack_lb_volume_size }} +{% if not openshift_openstack_provider_network_name %} +    depends_on: +      - interface +{% endif %} +{% endif %} + +  masters: +    type: OS::Heat::ResourceGroup +    properties: +      count: {{ openshift_openstack_num_masters }} +      resource_def: +        type: server.yaml +        properties: +          name: +            str_replace: +              template: k8s_type-%index%.cluster_id +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +                k8s_type: {{ openshift_openstack_master_hostname }} +          cluster_env: {{ openshift_openstack_public_dns_domain }} +          cluster_id:  {{ openshift_openstack_stack_name }} +          group: +            str_replace: +              template: k8s_type.cluster_id +              params: +                k8s_type: masters +                cluster_id: {{ openshift_openstack_stack_name }} +          type:        master +          image:       {{ openshift_openstack_master_image }} +          flavor:      {{ openshift_openstack_master_flavor }} +          key_name:    {{ openshift_openstack_keypair_name }} +{% if openshift_openstack_provider_network_name %} +          net:         {{ openshift_openstack_provider_network_name }} +          net_name:         {{ openshift_openstack_provider_network_name }} +{% else %} +          net:         { get_resource: net } +          subnet:      { get_resource: subnet } +          net_name: +            str_replace: +              template: openshift-ansible-cluster_id-net +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +{% if openshift_use_flannel|default(False)|bool %} +          attach_data_net: true +          data_net:    { get_resource: data_net } +          data_subnet: { get_resource: data_subnet } +{% endif %} +{% endif %} +          secgrp: +{% if openshift_openstack_flat_secgrp|default(False)|bool %} +            - { get_resource: flat-secgrp } +{% else %} +            - { get_resource: master-secgrp } +            - { get_resource: node-secgrp } +{% if openshift_openstack_num_etcd|int == 0 %} +            - { get_resource: etcd-secgrp } +{% endif %} +{% endif %} +            - { get_resource: common-secgrp } +          floating_network: +            if: +              - no_floating +              - null +              - {{ 
openshift_openstack_external_network_name }} +{% if openshift_openstack_use_bastion|bool or openshift_openstack_provider_network_name %} +          attach_float_net: false +{% endif %} +          volume_size: {{ openshift_openstack_master_volume_size }} +{% if openshift_openstack_master_server_group_policies|length > 0 %} +          scheduler_hints: +            group: { get_resource: master_server_group } +{% endif %} +{% if not openshift_openstack_provider_network_name %} +    depends_on: +      - interface +{% endif %} + +  compute_nodes: +    type: OS::Heat::ResourceGroup +    properties: +      count: {{ openshift_openstack_num_nodes }} +      removal_policies: +      - resource_list: {{ openshift_openstack_nodes_to_remove }} +      resource_def: +        type: server.yaml +        properties: +          name: +            str_replace: +              template: sub_type_k8s_type-%index%.cluster_id +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +                sub_type_k8s_type: {{ openshift_openstack_node_hostname }} +          cluster_env: {{ openshift_openstack_public_dns_domain }} +          cluster_id:  {{ openshift_openstack_stack_name }} +          group: +            str_replace: +              template: k8s_type.cluster_id +              params: +                k8s_type: nodes +                cluster_id: {{ openshift_openstack_stack_name }} +          type:        node +          subtype:     app +          node_labels: +{% for k, v in openshift_openstack_cluster_node_labels.app.iteritems() %} +            {{ k|e }}: {{ v|e }} +{% endfor %} +          image:       {{ openshift_openstack_node_image }} +          flavor:      {{ openshift_openstack_node_flavor }} +          key_name:    {{ openshift_openstack_keypair_name }} +{% if openshift_openstack_provider_network_name %} +          net:         {{ openshift_openstack_provider_network_name }} +          net_name:         {{ openshift_openstack_provider_network_name }} +{% else %} +          net:         { get_resource: net } +          subnet:      { get_resource: subnet } +          net_name: +            str_replace: +              template: openshift-ansible-cluster_id-net +              params: +                cluster_id: {{ openshift_openstack_stack_name }} +{% if openshift_use_flannel|default(False)|bool %} +          attach_data_net: true +          data_net:    { get_resource: data_net } +          data_subnet: { get_resource: data_subnet } +{% endif %} +{% endif %} +          secgrp: +            - { get_resource: {% if openshift_openstack_flat_secgrp|default(False)|bool %}flat-secgrp{% else %}node-secgrp{% endif %} } +            - { get_resource: common-secgrp } +          floating_network: +            if: +              - no_floating +              - null +              - {{ openshift_openstack_external_network_name }} +{% if openshift_openstack_use_bastion|bool or openshift_openstack_provider_network_name %} +          attach_float_net: false +{% endif %} +          volume_size: {{ openshift_openstack_node_volume_size }} +{% if not openshift_openstack_provider_network_name %} +    depends_on: +      - interface +{% endif %} + +  infra_nodes: +    type: OS::Heat::ResourceGroup +    properties: +      count: {{ openshift_openstack_num_infra }} +      resource_def: +        type: server.yaml +        properties: +          name: +            str_replace: +              template: sub_type_k8s_type-%index%.cluster_id +              params: +                cluster_id: {{ 
+                sub_type_k8s_type: {{ openshift_openstack_infra_hostname }}
+          cluster_env: {{ openshift_openstack_public_dns_domain }}
+          cluster_id:  {{ openshift_openstack_stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: infra
+                cluster_id: {{ openshift_openstack_stack_name }}
+          type:        node
+          subtype:     infra
+          node_labels:
+{% for k, v in openshift_openstack_cluster_node_labels.infra.items() %}
+            {{ k|e }}: {{ v|e }}
+{% endfor %}
+          image:       {{ openshift_openstack_infra_image }}
+          flavor:      {{ openshift_openstack_infra_flavor }}
+          key_name:    {{ openshift_openstack_keypair_name }}
+{% if openshift_openstack_provider_network_name %}
+          net:         {{ openshift_openstack_provider_network_name }}
+          net_name:    {{ openshift_openstack_provider_network_name }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ openshift_openstack_stack_name }}
+{% if openshift_use_flannel|default(False)|bool %}
+          attach_data_net: true
+          data_net:    { get_resource: data_net }
+          data_subnet: { get_resource: data_subnet }
+{% endif %}
+{% endif %}
+          secgrp:
+# TODO(bogdando) filter only required node rules into infra-secgrp
+{% if openshift_openstack_flat_secgrp|default(False)|bool %}
+            - { get_resource: flat-secgrp }
+{% else %}
+            - { get_resource: node-secgrp }
+{% endif %}
+{% if openshift_openstack_ui_ssh_tunnel|bool and openshift_openstack_num_masters|int < 2 %}
+            - { get_resource: lb-secgrp }
+{% endif %}
+            - { get_resource: infra-secgrp }
+            - { get_resource: common-secgrp }
+{% if not openshift_openstack_provider_network_name %}
+          floating_network: {{ openshift_openstack_external_network_name }}
+{% endif %}
+          volume_size: {{ openshift_openstack_infra_volume_size }}
+{% if openshift_openstack_infra_server_group_policies|length > 0 %}
+          scheduler_hints:
+            group: { get_resource: infra_server_group }
+{% endif %}
+{% if not openshift_openstack_provider_network_name %}
+    depends_on:
+      - interface
+{% endif %}
+
+{% if openshift_openstack_num_dns|int > 0 %}
+  dns:
+    type: OS::Heat::ResourceGroup
+    properties:
+      count: {{ openshift_openstack_num_dns }}
+      resource_def:
+        type: server.yaml
+        properties:
+          name:
+            str_replace:
+              template: k8s_type-%index%.cluster_id
+              params:
+                cluster_id: {{ openshift_openstack_stack_name }}
+                k8s_type: {{ openshift_openstack_dns_hostname }}
+          cluster_env: {{ openshift_openstack_public_dns_domain }}
+          cluster_id:  {{ openshift_openstack_stack_name }}
+          group:
+            str_replace:
+              template: k8s_type.cluster_id
+              params:
+                k8s_type: dns
+                cluster_id: {{ openshift_openstack_stack_name }}
+          type:        dns
+          image:       {{ openshift_openstack_dns_image }}
+          flavor:      {{ openshift_openstack_dns_flavor }}
+          key_name:    {{ openshift_openstack_keypair_name }}
+{% if openshift_openstack_provider_network_name %}
+          net:         {{ openshift_openstack_provider_network_name }}
+          net_name:    {{ openshift_openstack_provider_network_name }}
+{% else %}
+          net:         { get_resource: net }
+          subnet:      { get_resource: subnet }
+          net_name:
+            str_replace:
+              template: openshift-ansible-cluster_id-net
+              params:
+                cluster_id: {{ openshift_openstack_stack_name }}
+{% endif %}
+          secgrp:
+            - { get_resource: dns-secgrp }
+            - { get_resource: common-secgrp }
+{% if not openshift_openstack_provider_network_name %}
+          floating_network: {{ openshift_openstack_external_network_name }}
+{% endif %}
+          volume_size: {{ openshift_openstack_dns_volume_size }}
+{% if not openshift_openstack_provider_network_name %}
+    depends_on:
+      - interface
+{% endif %}
+{% endif %}
diff --git a/roles/openshift_openstack/templates/heat_stack_server.yaml.j2 b/roles/openshift_openstack/templates/heat_stack_server.yaml.j2
new file mode 100644
index 000000000..a829da34f
--- /dev/null
+++ b/roles/openshift_openstack/templates/heat_stack_server.yaml.j2
@@ -0,0 +1,270 @@
+heat_template_version: 2016-10-14
+
+description: OpenShift cluster server
+
+parameters:
+
+  name:
+    type: string
+    label: Name
+    description: Name
+
+  group:
+    type: string
+    label: Host Group
+    description: The Primary Ansible Host Group
+    default: host
+
+  cluster_env:
+    type: string
+    label: Cluster environment
+    description: Environment of the cluster
+
+  cluster_id:
+    type: string
+    label: Cluster ID
+    description: Identifier of the cluster
+
+  type:
+    type: string
+    label: Type
+    description: Type master or node
+
+  subtype:
+    type: string
+    label: Sub-type
+    description: Sub-type compute or infra for nodes, default otherwise
+    default: default
+
+  key_name:
+    type: string
+    label: Key name
+    description: Key name of keypair
+
+  image:
+    type: string
+    label: Image
+    description: Name of the image
+
+  flavor:
+    type: string
+    label: Flavor
+    description: Name of the flavor
+
+  net:
+    type: string
+    label: Net ID
+    description: Net resource
+
+  net_name:
+    type: string
+    label: Net name
+    description: Net name
+
+{% if not openshift_openstack_provider_network_name %}
+  subnet:
+    type: string
+    label: Subnet ID
+    description: Subnet resource
+{% endif %}
+
+{% if openshift_use_flannel|default(False)|bool %}
+  attach_data_net:
+    type: boolean
+    default: false
+    label: Attach-data-net
+    description: A switch for data port connection
+
+  data_net:
+    type: string
+    default: ''
+    label: Net ID
+    description: Net resource
+
+{% if not openshift_openstack_provider_network_name %}
+  data_subnet:
+    type: string
+    default: ''
+    label: Subnet ID
+    description: Subnet resource
+{% endif %}
+{% endif %}
+
+  secgrp:
+    type: comma_delimited_list
+    label: Security groups
+    description: Security group resources
+
+  attach_float_net:
+    type: boolean
+    default: true
+    label: Attach-float-net
+    description: A switch for floating network port connection
+
+{% if not openshift_openstack_provider_network_name %}
+  floating_network:
+    type: string
+    default: ''
+    label: Floating network
+    description: Network to allocate floating IP from
+{% endif %}
+
+  availability_zone:
+    type: string
+    description: The Availability Zone in which to launch the instance.
+    default: nova
+
+  volume_size:
+    type: number
+    description: Size of the volume to be created.
+    default: 1
+    constraints:
+      - range: { min: 1, max: 1024 }
+        description: must be between 1 and 1024 GB.
+
+  node_labels:
+    type: json
+    description: OpenShift Node Labels
+    default: {"region": "default" }
+
+  scheduler_hints:
+    type: json
+    description: Server scheduler hints.
+    default: {}
+
+outputs:
+
+  name:
+    description: Name of the server
+    value: { get_attr: [ server, name ] }
+
+  private_ip:
+    description: Private IP of the server
+    value:
+      get_attr:
+        - server
+        - addresses
+        - { get_param: net_name }
+        - 0
+        - addr
+
+  floating_ip:
+    description: Floating IP of the server
+    value:
+      get_attr:
+        - server
+        - addresses
+        - { get_param: net_name }
+{% if openshift_openstack_provider_network_name %}
+        - 0
+{% else %}
+        - 1
+{% endif %}
+        - addr
+
+conditions:
+  no_floating: { not: { get_param: attach_float_net } }
+{% if openshift_use_flannel|default(False)|bool %}
+  no_data_subnet: { not: { get_param: attach_data_net } }
+{% endif %}
+
+resources:
+
+  server:
+    type: OS::Nova::Server
+    properties:
+      name:      { get_param: name }
+      key_name:  { get_param: key_name }
+      image:     { get_param: image }
+      flavor:    { get_param: flavor }
+      networks:
+{% if openshift_use_flannel|default(False)|bool %}
+        if:
+          - no_data_subnet
+{% if use_trunk_ports|default(false)|bool %}
+          - - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+          - - port:  { get_resource: port }
+{% endif %}
+{% if use_trunk_ports|default(false)|bool %}
+          - - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+          - - port:  { get_resource: port }
+            - port:  { get_resource: data_port }
+{% endif %}
+
+{% else %}
+{% if use_trunk_ports|default(false)|bool %}
+        - port:  { get_attr: [trunk-port, port_id] }
+{% else %}
+        - port:  { get_resource: port }
+{% endif %}
+{% endif %}
+      user_data:
+        get_file: user-data
+      user_data_format: RAW
+      user_data_update_policy: IGNORE
+      metadata:
+        group: { get_param: group }
+        environment: { get_param: cluster_env }
+        clusterid: { get_param: cluster_id }
+        host-type: { get_param: type }
+        sub-host-type:    { get_param: subtype }
+        node_labels: { get_param: node_labels }
+      scheduler_hints: { get_param: scheduler_hints }
+
+{% if use_trunk_ports|default(false)|bool %}
+  trunk-port:
+    type: OS::Neutron::Trunk
+    properties:
+      name: { get_param: name }
+      port: { get_resource: port }
+{% endif %}
+
+  port:
+    type: OS::Neutron::Port
+    properties:
+      network: { get_param: net }
+{% if not openshift_openstack_provider_network_name %}
+      fixed_ips:
+        - subnet: { get_param: subnet }
+{% endif %}
+      security_groups: { get_param: secgrp }
+
+{% if openshift_use_flannel|default(False)|bool %}
+  data_port:
+    type: OS::Neutron::Port
+    condition: { not: no_data_subnet }
+    properties:
+      network: { get_param: data_net }
+      port_security_enabled: false
+{% if not openshift_openstack_provider_network_name %}
+      fixed_ips:
+        - subnet: { get_param: data_subnet }
+{% endif %}
+{% endif %}
+
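+# The two resources below are both optional: the floating IP is created only
+# when attach_float_net is true and no provider network is used, and the
+# Cinder volume is created (and attached as /dev/sdb) only when ephemeral
+# volumes are not requested.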
+{% if not openshift_openstack_provider_network_name %}
+  floating-ip:
+    condition: { not: no_floating }
+    type: OS::Neutron::FloatingIP
+    properties:
+      floating_network: { get_param: floating_network }
+      port_id: { get_resource: port }
+{% endif %}
+
+{% if not openshift_openstack_ephemeral_volumes|default(false)|bool %}
+  cinder_volume:
+    type: OS::Cinder::Volume
+    properties:
+      size: { get_param: volume_size }
+      availability_zone: { get_param: availability_zone }
+
+  volume_attachment:
+    type: OS::Cinder::VolumeAttachment
+    properties:
+      volume_id: { get_resource: cinder_volume }
+      instance_uuid: { get_resource: server }
+      mountpoint: /dev/sdb
+{% endif %}
diff --git a/roles/openshift_openstack/templates/user_data.j2 b/roles/openshift_openstack/templates/user_data.j2
new file mode 100644
index 000000000..eb65f7cec
--- /dev/null
+++ b/roles/openshift_openstack/templates/user_data.j2
@@ -0,0 +1,13 @@
+#cloud-config
+disable_root: true
+
+system_info:
+  default_user:
+    name: openshift
+    sudo: ["ALL=(ALL) NOPASSWD: ALL"]
+
+write_files:
+  - path: /etc/sudoers.d/00-openshift-no-requiretty
+    permissions: '0440'
+    content: |
+      Defaults:openshift !requiretty
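A note on the `0`/`1` indexing in the `floating_ip` output of the server template above: Nova's `addresses` attribute groups a server's addresses by network name, and when a floating IP is associated it typically appears as a second entry after the fixed IP. A sketch of the attribute's shape, with made-up values and assuming the default `openshift-ansible-cluster_id-net` naming for a stack named `openshift`:

```yaml
# Illustrative shape of get_attr: [server, addresses]; all values are examples.
openshift-ansible-openshift-net:
  - addr: 192.168.99.12        # index 0: fixed IP on the cluster network
    OS-EXT-IPS:type: fixed
  - addr: 172.24.4.21          # index 1: the associated floating IP
    OS-EXT-IPS:type: floating
```

On a provider network no floating IP is attached, so the fixed IP at index 0 is the only entry, which is why the template switches the index on `openshift_openstack_provider_network_name`.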

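The provisioning tasks that feed these rendered templates to Heat are not part of this hunk, so purely as a hedged sketch (the module choice, variable names, and timeout are assumptions, not taken from this patch), the hand-off from Ansible could look roughly like:

```yaml
# Hypothetical wiring, not from this patch: create or update the Heat stack
# from the rendered heat_stack.yaml and wait for it to reach CREATE_COMPLETE.
- name: create the OpenShift cluster stack
  os_stack:
    name: "{{ openshift_openstack_stack_name }}"
    state: present
    template: "{{ stack_template_path }}"  # assumed: path of the rendered template
    wait: yes
    timeout: 3600
```

A rendered template can also be exercised by hand with `openstack stack create -t heat_stack.yaml <stack name>`, which is convenient when debugging template changes in isolation.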