When using a bastion and a single master, add the bastion node's public IP, in addition to the public master's IP, for the DNS record.
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
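A minimal sketch of that idea with hypothetical variable names (none of these names are taken from the actual change):

```yaml
- name: Publish both public IPs for the master's DNS record
  set_fact:
    openshift_master_public_ips: "{{ [master_public_ip, bastion_public_ip] }}"
  when:
    - use_bastion | default(false) | bool
    - groups['masters'] | length == 1
```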
* scale-up: playbook for upscaling app nodes
* scale-up: removed debug
* scale-up: made suggested changes
* scale-up: indentation fix
* upscaling: process split into two playbooks that are executed by a bash script
- upscaling_run.sh: bash script, usage displayed using -h parameter
- upscaling_pre-tasks: check that new value is higher, change inventory variable
- upscaling_scale-up: rerun provisioning and installation, verify change
* upscaling_run: fixed openshift-ansible-contrib directory name
* upscaling_run: inventory can be entered as relative path
* upscaling_scale-up: fixed formatting
* upscaling: minor changes
* upscaling: moved to .../provisioning/openstack directory, README updated, minor changes made
* README: minor changes
* README: formatting
* upscaling: minor fix
* upscaling: fix
* upscaling: added customisations, fixes
- openshift-ansible-contrib and openshift-ansible paths are customisable
- fixed implicit incrementation by 1
* upscaling: fixes
* upscaling: fixes
* upscaling: another fix
* upscaling: another fix
* upscaling: fix
* upscaling: back to a single playbook, README updated
* minor fix
* pre_tasks: added labels for autoscaling
* scale-up: fixes
* scale-up: fixed host variables, post-verification is only based on labels
* scale-up: added openshift-ansible path customisation
- path has to be absolute, cannot contain '/' at the end
* scale-up: fix
* scale-up: debug removed
* README: added docs on openshift_ansible_dir, note about bastion
* static_inventory: newly added nodes are added to new_nodes group
- note: re-running provisioning fails when trying to install docker
* removing new line
* scale-up: running byo/config.yml or scaleup.yml based on the situation
- (whether there is an existing deployment or not)
* openstack.yml: indentation fix
* added refresh inventory
* upscaling: new_nodes only contains new nodes; it is not used during the first deployment
* static_inventory: make sure that new nodes end up only in their new_nodes group
* bug fixes
* another fix
* fixed condition
* scale-up, static_inventory role: all app node data gathered before provisioning
* upscaling: bug fixes
* upscaling: more fixes
* fixes
* upscaling: fix
* upscaling: fix
* upscaling: another logic fix
* bug fix for non-scaling deployments
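As a sketch of the mechanism described above (host names are made up, the group layout is the standard openshift-ansible one): the static inventory puts only newly provisioned app nodes into `new_nodes`, so a scale-up re-run touches just those hosts instead of reconfiguring the whole cluster.

```ini
[OSEv3:children]
masters
etcd
nodes
new_nodes

[new_nodes]
# only hosts created by the scale-up run land here
app-node-2.openshift.example.com
```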
* Clean up cluster definition
* Changed disk sizes for 3.6
Lowering required permissions for IAM role
* Set Ansible version in openstack CI for 2.3
Ansible 2.4 just came out and it breaks our playbooks. Let's pin the
end-to-end CI version to 2.3 until we've figured it out.
* Only use the deployed DNS for validation
I think the openshift installation and teardown is broken now that the
public DNS disables recursion. So we'll only use it for the validation
steps and then turn it back off.
* Use bash trap to clean up the DNS
* Actually display the commit message
* Use openshift-ansible-3.6.22-1 in openstack CI
The commit following this tag is broken for us.
* Add Keycloak/SSO Support
* Make sure sso install occurs after ocp is done
* Add sso/keycloak to 3.6 ha ref arch
* switch to localhost for initial part of setup-sso.yml
* Change restart after sso
* Change to same password and switch to upstream repo
* Better documentation
* Grammar fixes
* Document using a Docker image for Ansible host
* Fix the markdown url syntax
* Mention keystonerc as well
* Make `openstack_private_ssh_key` optional
Before this, the deployer could not reasonably rely on their own SSH
configuration or e.g. using the `--private-key` option to
ansible-playbook because we always wrote the `ansible_private_key_file`
value in the static inventory.
This change makes the `openstack_private_ssh_key` variable truly
optional: if it's not set, the static inventory will not configure the
SSH key and will just rely on the existing configuration.
* Update the openstack e2e CI
It no longer sets the SSH keys explicitly -- which should just work with
the previous commit.
* Put back the `openstack_ssh_public_key` in CI
This is the option we actually need to keep. This should fix the CI
failures.
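Conceptually, the static inventory template now only emits the key path when the deployer actually set it; a simplified fragment (the surrounding template is not shown in this log):

```
{% if openstack_private_ssh_key is defined %}
ansible_private_key_file={{ openstack_private_ssh_key }}
{% endif %}
```

If the variable is unset, nothing is written and ansible-playbook falls back to the usual SSH configuration or an explicit `--private-key`.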
* Parametrise openshift-emptydir-quota role
* Fix scaling up for 3.6 and RHEL
* Use ansible installer role for setting the node local quota
Try to sort the openshift vars in a better way
* There is now only one google-compute-engine package
Make rhsm registry optional for openstack
It is now commented out since it's no longer necessary.
This was a regression -- it used to be optional (defaulting to False),
but at some point we ended up requiring it again.
Clear the previous inventory during provisioning
If there was a left-over inventory from a previous run that had nodes
which were subsequently removed, these would still show up in the
Ansible's in-memory inventory and Ansible would fail trying to connect
to them.
This is because Ansible automatically loads the `inventory/hosts` file
if it exists and even if we overwrite it later, every node and group
still remains in the memory.
By removing the inventory file and calling the `refresh_inventory`
meta task, we make sure that any left-over values are removed.
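A minimal sketch of that clean-up (the path is the default `inventory/hosts` mentioned above):

```yaml
- name: Remove any left-over static inventory from a previous run
  file:
    path: inventory/hosts
    state: absent

- name: Drop stale hosts and groups from the in-memory inventory
  meta: refresh_inventory
```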
Pre-create a Cinder registry volume
Deployments without the cinder registry would fail, because the
`cinder_registry_volume` variable is still set even when we don't
actually create the volume.
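A sketch of the guard this implies, with a hypothetical size variable (only `cinder_registry_volume` appears in the message): the volume, and with it the variable, is only created when a registry volume was actually requested.

```yaml
- name: Pre-create the Cinder registry volume
  os_volume:
    state: present
    display_name: "{{ cinder_registry_volume }}"
    size: "{{ cinder_registry_volume_size }}"
  when: cinder_registry_volume_size is defined
```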
* modify readme
* update readme
* remove fs from config and add to install
* modify readme
* update readme
* Add ability to support custom api and console ports
* Missed an ingress rule
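For illustration, assuming the standard openshift-ansible variable names for these ports (not confirmed by the commit itself), the override would look like the following; the extra ingress rule has to allow the same port:

```ini
[OSEv3:vars]
openshift_master_api_port=443
openshift_master_console_port=443
```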
* Attach and detach a volume, wait for it to be accessible
This is mostly just handling the attach/detach code, making sure the necessary
vars are accessible where they need to be as well as finding out the correct
device name the volume is attached as.
* Create temp directory for mounts, remove some debug info
* add the fs actions
* Remove debug
* Prepare the volume automatically if possible
* Add docs and sample inventory
* Read OS_* creds from shell in sample inventory
* Fix yamlint complaint
* Update readme
This mentions the potential pitfalls when using devstack.
* Better check for the router deployment in CI
* Set the openshift_hosted_*_wait vars to True
* Fix typo
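A rough sketch of the attach/format/detach flow, with hypothetical variable names; discovering the device name the volume actually shows up as is the fiddly part called out in the first bullet:

```yaml
- name: Attach the volume to this host
  os_server_volume:
    state: present
    server: "{{ inventory_hostname }}"
    volume: "{{ registry_volume_name }}"

- name: Create a filesystem on the device the volume appeared as
  filesystem:
    fstype: xfs
    dev: "{{ registry_volume_device }}"   # e.g. /dev/vdb, discovered at runtime

- name: Detach the volume again
  os_server_volume:
    state: absent
    server: "{{ inventory_hostname }}"
    volume: "{{ registry_volume_name }}"
```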
* Add ssd storage class by default
* Increase minimum number of app nodes to 3
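As an illustration only (the provisioner and volume type depend on which cloud this reference architecture targets, so treat both values as placeholders), an `ssd` StorageClass looks roughly like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd   # placeholder provisioner
parameters:
  type: pd-ssd                      # placeholder SSD-backed volume type
```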
This ensures that the ports that the servers were using before this
commit will be parent ports of Neutron trunk ports. Thanks to this,
there can be nested Neutron ports inside the OS::Nova::Server resources
created either in the heat stack or dynamically inside the Instances.
Signed-off-by: Antoni Segura Puimedon <antonisp@celebdor.com>
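A condensed Heat sketch of that wiring (resource and parameter names are hypothetical): the server's port doubles as the trunk's parent port, so subports for nested containers can be attached to it later.

```yaml
heat_template_version: newton

parameters:
  net:    {type: string}
  image:  {type: string}
  flavor: {type: string}

resources:
  parent_port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: net}

  trunk:
    type: OS::Neutron::Trunk
    properties:
      port: {get_resource: parent_port}

  server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
      networks:
        - port: {get_resource: parent_port}
```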
* update ds var name
* update var for conditional check
* fix tags for playbook use
* remove pw from inventory ini
* lint issues
* updating setup_ansible.sh
* updating README
add etcd str role
auto create the storage class based on vmware_datacenter
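A sketch of the kind of object such a task would generate, assuming the in-tree vSphere provisioner; the templated name is just a guess at how `vmware_datacenter` might be used:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "{{ vmware_datacenter | lower }}-storage"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```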
Vmw 3.6
Adds automated setup for 3.6 vmware cloud provider
into vmw-3.6