* Configure fluentd to aggregate container logs
* Create OpenShift Docker Registry
* Create OpenShift router
Query libvirt’s DHCP leases rather than inspecting the host’s ARP cache
to find the VMs’ IPs.
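
For illustration, a minimal sketch of the same idea using the libvirt Python
bindings (the network name, and the DHCPLeases call available since libvirt
1.2.6, are assumptions about the environment, not code from this repo):
```python
# Minimal sketch: list the IPs libvirt's DHCP server has leased out,
# instead of scraping the host's ARP cache.
import libvirt

conn = libvirt.open('qemu:///system')
net = conn.networkLookupByName('default')  # assumed network name
for lease in net.DHCPLeases():
    # each lease is a dict with 'hostname', 'ipaddr', 'mac', ...
    print(lease['hostname'], lease['ipaddr'])
conn.close()
```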
- Fix bug where playbooks/byo/config.yml would error if only a master is
defined in the inventory.
- Do not attempt to fetch a file to its own location when playbooks are run
  locally on the master
- Fix openshift_facts when run against a host in a VPC that does not assign
  internal/external hostnames or IPs
- Fix setting of labels and annotations on node instances and in
  openshift_facts
- Converted openshift_facts to use JSON for local_fact storage instead of an
  ini file; included code that should migrate existing ini users to JSON
  (a rough migration sketch follows this list)
- Added region/zone setting to the byo inventory
- Fix fact-related bug where deployment_type was being set on the node role
  instead of the common role for node hosts
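
As a rough illustration of the ini-to-JSON migration mentioned above (a
sketch under assumed paths and layout, not the module's actual code):
```python
# Hypothetical migration of an ini-style local facts file to JSON.
import json
import ConfigParser  # Python 2, matching Ansible modules of this era

FACTS_PATH = '/etc/ansible/facts.d/openshift.fact'  # assumed location

def migrate_ini_to_json(path=FACTS_PATH):
    parser = ConfigParser.SafeConfigParser()
    try:
        if not parser.read(path):
            return {}
    except ConfigParser.Error:
        with open(path) as f:
            return json.load(f)  # already JSON, nothing to migrate
    facts = dict((s, dict(parser.items(s))) for s in parser.sections())
    with open(path, 'w') as f:
        json.dump(facts, f, indent=2)  # rewrite the same file as JSON
    return facts
```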
- Add Vagrantfile for configuring a basic cluster
- Add an initial README for using vagrant
- Explicitly set connection: local and sudo: false for localhost actions in
  playbooks/common/openshift-node/config.yml
- Fix permissions issue with the openshift config file for a non-root user
- Create a separate docker volume in the aws openshift-cluster playbooks
- Default to using ephemeral storage, but allow it to be overridden
- Allow root volume settings to be overridden as well
- Add user-data cloud-config to bootstrap the installation/configuration of
  docker-storage-setup (sketched after this list)
- pylint cleanup for oo_filters.py
- Remove leftover traces of the deployment_type tags that were previously
  removed:
  - the oo_get_deployment_type_from_groups filter in oo_filters.py
  - the cluster list playbooks' references to the
    oo_get_deployment_type_from_groups filter
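
For illustration, user-data along these lines could drive the bootstrap (a
sketch: the device name and volume group are assumptions, not the playbook's
literal values):
```python
# Hypothetical cloud-config user-data passed at instance launch: cloud-init
# writes the docker-storage-setup config and runs it on first boot.
USER_DATA = """#cloud-config
write_files:
  - path: /etc/sysconfig/docker-storage-setup
    owner: root:root
    permissions: '0644'
    content: |
      DEVS=/dev/xvdb
      VG=docker_vg
runcmd:
  - [ docker-storage-setup ]
"""
```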
- Users can now override the deployment_vars variables with the associated
  ec2_* variables
- Added deployment_type- and env-specific vars files that load some ec2_*
  overrides
- Added the ability to search for AMIs by ami_name
- This allows us to specify a base name with a wildcard and have the playbook
  choose the latest available image for that name (see the sketch after this
  list)
- Added a copy of the ec2_find_ami module that will be in Ansible 2.0, until
  we can make Ansible 2.0 a requirement.
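
A rough sketch of the wildcard search with boto (the region, the name
pattern, and reliance on the creationDate attribute are assumptions):
```python
# Sketch: find the newest AMI whose name matches a wildcard pattern.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')  # illustrative region
images = conn.get_all_images(filters={'name': 'my-base-image-*'})
if images:
    latest = max(images, key=lambda image: image.creationDate)
    print(latest.id, latest.name)
```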
aws terminate playbook improvements
- Reduce duplication in the terminate playbooks between openshift-master and
  openshift-node (they both now just include playbooks/aws/terminate.yml)
- Update the openshift-cluster terminate playbook to include the new shared
  terminate playbook, and delete all cluster hosts at once instead of
  treating masters and nodes differently
- Remove the env, host-type and env-host-type tags from instances before
  terminating (since most users can't terminate, we mostly just rename
  instances to -terminate and stop them, so this prevents "terminated" hosts
  from being returned by the dynamic inventory, at least after the cache is
  refreshed); see the sketch after this list
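
A rough boto sketch of that cleanup (the region and instance id are
placeholders):
```python
# Strip the cluster tags, rename with a '-terminate' suffix, then stop.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
for inst in conn.get_only_instances(instance_ids=['i-00000000']):
    conn.delete_tags([inst.id], ['env', 'host-type', 'env-host-type'])
    inst.add_tag('Name', inst.tags.get('Name', '') + '-terminate')
    inst.stop()
```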
Add VPC support to the ec2 cluster; add more overrides for variables
Fix common node config playbook when ansible is run on the first master
Todo for sync
Massive refactor, deployment-type support, config updates, reduce duplication
Move `virsh pool-refresh`
The `pool-refresh` command asks libvirt to rescan the contents of a volume pool.
This is used to make `libvirt` take into account volumes that were created
outside of libvirt's control, i.e. not with a `virsh` command.
`pool-refresh` is useless after a `pool-create`, as the contents are scanned at
creation.
`pool-refresh` is mandatory after files have been created inside an existing pool.
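
For illustration, the Python-bindings equivalent (the pool name is an
assumption):
```python
# After creating a volume file directly inside an existing pool's directory,
# ask libvirt to rescan the pool -- the `virsh pool-refresh` equivalent.
import libvirt

conn = libvirt.open('qemu:///system')
pool = conn.storagePoolLookupByName('default')
pool.refresh(0)  # flags must currently be 0
conn.close()
```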
Make the error message checks locale-proof
On a computer that has a non-English locale set, the error messages look like this:
```
$ virsh net-info foo
erreur :impossible de récupérer le réseau « foo »
erreur :Réseau non trouvé : no network with matching name 'foo'
```
```
$ virsh pool-info foo
erreur :impossible de récupérer le pool « foo »
erreur :Pool de stockage introuvable : no storage pool with matching name 'foo'
```
The classical way to make these checks locale-proof is to force a given locale,
like this:
```
$ LANG=POSIX virsh net-info foo
error: failed to get network 'foo'
error: Réseau non trouvé : no network with matching name 'foo'
```
```
$ LANG=POSIX virsh pool-info foo
error: failed to get pool 'foo'
error: Pool de stockage introuvable : no storage pool with matching name 'foo'
```
It looks like the "Network not found" or "Storage pool not found" parts of the message
are generated by the `libvirtd` daemon and are not subject to the locale of the `virsh`
client.
The clean fix would be to patch `libvirt` so that `virsh` sends its locale to
the `libvirtd` daemon.
In the meantime, it is safer to have our playbook match the part of the message
that is not subject to the daemon's locale.
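
A minimal sketch of such a locale-proof check (illustrative, not the
playbook's actual task):
```python
# Force the client locale and match only the virsh-generated part of the
# message, which the daemon's locale cannot affect.
import os
import subprocess

env = dict(os.environ, LANG='POSIX')
proc = subprocess.Popen(['virsh', 'net-info', 'foo'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        env=env)
_, stderr = proc.communicate()
if b'failed to get network' in stderr:
    print('network foo does not exist')
```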
According to https://libvirt.org/formatdomain.html#elementsMetadata, the
`metadata` tag can contain only one top-level element per namespace.
Because of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag.
As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group.
This is problematic for the `terminate.yml` playbook, which iterates over
`groups['tag-env-{{ cluster-id }}']`.
The symptom is that `oo_hosts_to_terminate` was not defined.
In the end, as Ansible couldn't iterate over the value of
`groups['oo_hosts_to_terminate']`, it iterated over its letters:
```
TASK: [Destroy VMs] ***********************************************************
failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
msg: virtual machine g not found
failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
msg: virtual machine g not found
failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
msg: virtual machine r not found
failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
msg: virtual machine r not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
msg: virtual machine u not found
failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
msg: virtual machine u not found
failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
msg: virtual machine p not found
failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
msg: virtual machine p not found
failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
msg: virtual machine s not found
failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
msg: virtual machine s not found
failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
msg: virtual machine [ not found
failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
msg: virtual machine [ not found
failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
msg: virtual machine ' not found
failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
msg: virtual machine ' not found
failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
msg: virtual machine o not found
failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
msg: virtual machine o not found
etc…
```
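
An illustrative shape of the fix (the element names here are hypothetical):
put a single top-level element in the namespace and hang every tag off it,
rather than declaring one top-level element per tag:
```python
# Hypothetical domain metadata: one top-level element per namespace, with
# all tags as children, so libvirt keeps every tag.
METADATA_XML = """
<ansible:tags xmlns:ansible="https://github.com/ansible/ansible">
  <ansible:tag>env-{{ cluster }}</ansible:tag>
  <ansible:tag>deployment-type-{{ deployment_type }}</ansible:tag>
</ansible:tags>
"""
```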
Configuration updates for latest builds
- Switch to using create-node-config
- Switch sdn services to use etcd over SSL
  - this re-uses the client certificate deployed on each node
  - additional node registration changes
- Do not assume that the metadata service is available in the openshift_facts
  module
- Call systemctl daemon-reload after installing openshift-master,
  openshift-sdn-master, openshift-node and openshift-sdn-node
- Fix bug overriding openshift_hostname and openshift_public_hostname in byo
  playbooks
- Start moving generated configs to /etc/openshift
- Some custom module cleanup
- Add known issue with ansible-1.9 to README_OSE.md
- Update to genericize the kubernetes_register_node module
  - default to using kubectl for commands
  - allow for overriding kubectl_cmd
  - in the openshift_register_node role, override kubectl_cmd to openshift_kube
- Set default openshift_registry_url for enterprise when deployment_type is
  enterprise
- Fix openshift_register_node for the client config change
- Ensure that the master certs directory is created
- Add roles and filter_plugin symlinks to playbooks/common/openshift-master
  and node
- Allow a non-root user with sudo nopasswd access
- Updates for README_OSE.md
- Update the byo inventory, adding additional comments
- Updates for node cert/config sync to work with a non-root user using sudo
- Move node config/certs to /etc/openshift/node
- Don't use a path for mktemp; addresses
  https://github.com/openshift/openshift-ansible/issues/154
Create common playbooks
- Create common/openshift-master/config.yml
- Create common/openshift-node/config.yml
- Update playbooks to use the new common playbooks
- Update launch playbooks to call update playbooks
- Fix openshift_registry and openshift_node_ip usage
Set default deployment type to origin
- openshift_repo updates for enabling origin deployments
  - also separate the repo and gpgkey file structure
  - remove the kubernetes repo since it isn't currently needed
- Full deployment type support for bin/cluster
  - honor the OS_DEPLOYMENT_TYPE env variable
  - add a --deployment-type option, which will override OS_DEPLOYMENT_TYPE if
    set
  - if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, default to
    origin installs (see the sketch after this list)
Additional changes:
- Add a separate config action to bin/cluster that runs ansible config but
  does not update packages
- Some more duplication reduction in cluster playbooks
- Rename task files in the playbooks dirs to have "tasks" in their names for
  clarity
- Update the aws/gce scripts to use a directory for inventory (otherwise,
  when the dynamic inventory returns no hosts, there is an error)
libvirt refactor and update
- Add a libvirt dynamic inventory
- Updates to use the dynamic inventory for libvirt
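
As a sketch of the deployment-type precedence described above (a hypothetical
standalone version, not bin/cluster's actual code):
```python
# --deployment-type wins over OS_DEPLOYMENT_TYPE, which wins over 'origin'.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--deployment-type', dest='deployment_type')
args = parser.parse_args()

deployment_type = (args.deployment_type
                   or os.environ.get('OS_DEPLOYMENT_TYPE')
                   or 'origin')
print(deployment_type)
```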
Fix issues launching openshift on AWS
Make the security group an environment variable, defaulting to ‘public’
- Cleans up the repo root a bit
- Added byo playbooks
- Added a byo (example) inventory
- Added a README_OSE.md for getting started with Enterprise deployments
- Added an ansible.cfg as an example of configuration helpful for these
  playbooks/roles
- Add openshift_facts role and module
  - created a new role, openshift_facts, that contains an openshift_facts
    module
- Refactor openshift_* roles to use openshift_facts instead of relying on
  defaults
- Refactor playbooks to use openshift_facts
- Clean up inventory group_vars
- Update defaults
  - update openshift_master role firewall defaults
  - remove the etcd peer port, since we will not be supporting clustered
    embedded etcd
  - remove 8444, since the console now runs on the api port by default
  - add 8444 and 7001 to disabled services to ensure removal when updating
- Add a new role, os_env_extras_node, that is a subset of the docker role
  - previously, we were starting/enabling docker, which was causing issues
    with some installations
  - does not install or start docker, since the openshift-node role will
    handle that for us
  - only adds root to the dockerroot group
- Update playbooks to use the os_env_extras_node role instead of the docker
  role
- os_firewall bug fixes
  - ignore ip6tables for now, since we are not configuring any ipv6 rules
  - if installing the package, do a daemon-reload before starting/enabling
    the service
- Add aws support to bin/cluster
- Add a list action to bin/cluster
- Add an update action to bin/cluster
- Clean up some stray debug statements
- Some variable renaming for clarity
problem triggers. Also added oo_flatten to filters for arrays of arrays.
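
A minimal sketch of what an oo_flatten-style filter does (the real filter in
filter_plugins/oo_filters.py includes more validation):
```python
# Flatten a list of lists one level, e.g. [[1, 2], [3]] -> [1, 2, 3].
def oo_flatten(data):
    if not isinstance(data, list):
        raise TypeError('oo_flatten expects a list')
    return [item for sublist in data for item in sublist]

assert oo_flatten([['a', 'b'], ['c']]) == ['a', 'b', 'c']
```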
- Rename the repos role to openshift_repos
- Make openshift_repos a dependency of openshift_common
- Add README and metadata for openshift_repos
- Playbook updates for the role rename
- Verify libselinux-python is installed, otherwise some of the built-in
  modules we use fail
| |
- Does not install or start docker, since the openshift-node role will handle
that for us
- Only add root to the dockerroot group and configures the enter-container
script.
- Don't use set_fact on localhost for openshift_master_ips and
  openshift_master_public_ips
  - we are only using them for the configure play
  - move the definition to the vars section of the configure play
  - otherwise we'd have to set openshift_master_ips and
    openshift_master_public_ips from hostvars['localhost'], and since we
    aren't referencing them anywhere else, we might as well just do it in
    vars instead of set_fact on localhost.