try to oversubscribe the runc nodes

It seems that a node request gets pinned to a particular provider pool
at the moment the node is first requested. This leads to considerable
idle time when all the node requests for a handful of jobs land on a
single runc hypervisor: because we had `max-servers: 1`, those jobs
ended up waiting for their nodes.

This change is not perfect, but given that the build scripts of
essentially everything we build are crap (they rarely come close to
saturating a node), there is a decent chance that oversubscription will
actually work. Let's give it a try...

Change-Id: I6382fbfcad8eee03bff8119c94b00104f3c5fd6f
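
For context, `max-servers` is the Nodepool limit on how many nodes a
provider pool may hold at once. A rough sketch of what raising it might
look like, assuming an OpenStack-driver-style pool layout; the provider,
pool, and label names below are made up:

    providers:
      - name: runc01              # hypothetical runc hypervisor
        pools:
          - name: main
            # was 1; setting this above the hypervisor's real capacity
            # is the oversubscription this change introduces
            max-servers: 4
            labels:
              - name: runc-node   # hypothetical label requested by jobs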
1 file changed
tree: 88711d8e40a4d09a4e40ea4b6d60a593ac6b9a2b
README.md
ansible.cfg
files/
group_vars/
production
requirements.yml
roles/
site.yml
README.md

Continuous Integration (CI) Setup via Ansible

This repository holds the Ansible configuration that currently powers the CI infrastructure tied to our Gerrit: mostly Zuul v3 with Nodepool, plus log storage and related services.

Note that some pieces (Gerrit itself in particular) are still deployed via Puppet for legacy reasons. That configuration is internal.

# Example: provision the Zuul server
ansible-playbook -i production site.yml -l zuul-server
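
The role dependencies listed in requirements.yml can presumably be installed with ansible-galaxy before running the playbook; installing them into roles/ is an assumption about this repository's layout:

# Example: install external role dependencies first (target path assumed)
ansible-galaxy install -r requirements.yml -p roles/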