commit | eea87bbb6aa668bdf6f824e0796bcdac72c0c7ef | |
---|---|---|
author | Jan Kundrát <jan.kundrat@cesnet.cz> | Tue Feb 26 19:21:59 2019 +0100 |
committer | Jan Kundrát <jan.kundrat@cesnet.cz> | Tue Feb 26 19:21:59 2019 +0100 |
tree | 88711d8e40a4d09a4e40ea4b6d60a593ac6b9a2b | |
parent | 5403707d6ee44af2433dfbb0f7d7675aec13315f | |
try to oversubscribe the runc nodes

It seems that a node request gets pinned to a particular pool as soon as the node is initially requested. This leads to considerable idle time when all node requests for a couple of jobs "land" on a single runc hypervisor. Because we had `max-servers: 1`, jobs were left waiting for their nodes.

This change is not perfect, but given that the build scripts of essentially anything we're building are crap, there's a chance that oversubscription can actually work. Let's give it a try...

Change-Id: I6382fbfcad8eee03bff8119c94b00104f3c5fd6f
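The `max-servers` knob mentioned above is a per-pool limit in Nodepool's configuration. As a rough illustration of the kind of change the commit describes, a pool definition with a raised limit might look like the sketch below; the provider and label names are placeholders, not taken from this repository, and other required pool settings are omitted.

```yaml
# Hypothetical nodepool.yaml fragment; names are placeholders.
providers:
  - name: runc-hypervisor-example   # placeholder provider name
    pools:
      - name: main
        max-servers: 4              # previously 1; allow several nodes per hypervisor
        labels:
          - name: ci-node-example   # placeholder label
```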
These playbooks are what currently powers the CI infrastructure tied to our Gerrit. It's mostly about Zuul v3 with Nodepool, log storage, and related services.
Note that some pieces (Gerrit itself in particular) are still deployed via Puppet for legacy reasons. That configuration is internal.
```shell
# Example: provision the Zuul server
ansible-playbook -i production site.yml -l zuul-server
```
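In that invocation, `-i production` selects the inventory file and `-l zuul-server` limits the run to that host or group. A minimal, hypothetical play of the kind such a command would target is sketched below; the actual plays, groups, and role names in this repository's `site.yml` are not reproduced here.

```yaml
# Hypothetical site.yml fragment; role and host names are placeholders.
- hosts: zuul-server        # matches the -l zuul-server limit
  become: true              # assumed: provisioning needs root
  roles:
    - zuul                  # placeholder role name
```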