Merge "Update sphinx jobs to use python3" into feature/zuulv3
diff --git a/doc/source/admin/components.rst b/doc/source/admin/components.rst
index b3c2e44..3bec28a 100644
--- a/doc/source/admin/components.rst
+++ b/doc/source/admin/components.rst
@@ -224,6 +224,11 @@
.. attr:: scheduler
+ .. attr:: command_socket
+ :default: /var/lib/zuul/scheduler.socket
+
+ Path to command socket file for the scheduler process.
+
.. attr:: tenant_config
:required:
@@ -282,6 +287,11 @@
.. attr:: merger
+ .. attr:: command_socket
+ :default: /var/lib/zuul/merger.socket
+
+ Path to command socket file for the merger process.
+
.. attr:: git_dir
Directory in which Zuul should clone git repositories.
@@ -392,6 +402,11 @@
.. attr:: executor
+ .. attr:: command_socket
+ :default: /var/lib/zuul/executor.socket
+
+ Path to command socket file for the executor process.
+
.. attr:: finger_port
:default: 79
@@ -612,3 +627,65 @@
To start the web server, run ``zuul-web``. To stop it, kill the
PID which was saved in the pidfile specified in the configuration.
+
+Finger Gateway
+--------------
+
+The Zuul finger gateway listens on the standard finger port (79) for
+finger requests specifying a build UUID for which it should stream log
+results. The gateway will determine which executor is currently running that
+build and query that executor for the log stream.
+
+This is intended to be used with the standard finger command line client.
+For example::
+
+ finger UUID@zuul.example.com
+
+The above would stream the logs for the build identified by `UUID`.
+
+Configuration
+~~~~~~~~~~~~~
+
+In addition to the common configuration sections, the following
+sections of ``zuul.conf`` are used by the finger gateway:
+
+.. attr:: fingergw
+
+ .. attr:: command_socket
+ :default: /var/lib/zuul/fingergw.socket
+
+ Path to command socket file for the finger gateway process.
+
+ .. attr:: listen_address
+ :default: all addresses
+
+ IP address or domain name on which to listen.
+
+ .. attr:: log_config
+
+ Path to log config file for the finger gateway process.
+
+ .. attr:: pidfile
+ :default: /var/run/zuul-fingergw/zuul-fingergw.pid
+
+ Path to PID lock file for the finger gateway process.
+
+ .. attr:: port
+ :default: 79
+
+ Port to use for the finger gateway. Note that since command line
+ finger clients cannot usually specify the port, leaving this set to
+ the default value is highly recommended.
+
+ .. attr:: user
+ :default: zuul
+
+ User ID for the zuul-fingergw process. In normal operation as a
+ daemon, the finger gateway should be started as the ``root`` user, but
+ it will drop privileges to this user during startup.
+
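+A minimal example of the ``zuul.conf`` section for the gateway might look
+like the following; the values simply restate the documented defaults above
+and are only illustrative::
+
+   [fingergw]
+   port=79
+   user=zuul
+   command_socket=/var/lib/zuul/fingergw.socket
+   pidfile=/var/run/zuul-fingergw/zuul-fingergw.pid
+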
+Operation
+~~~~~~~~~
+
+To start the finger gateway, run ``zuul-fingergw``. To stop it, kill the
+PID which was saved in the pidfile specified in the configuration.
diff --git a/doc/source/admin/drivers/github.rst b/doc/source/admin/drivers/github.rst
index 7eebbdc..4f46af6 100644
--- a/doc/source/admin/drivers/github.rst
+++ b/doc/source/admin/drivers/github.rst
@@ -7,18 +7,103 @@
interact with the public GitHub service as well as site-local
installations of GitHub enterprise.
-.. TODO: make this section more user friendly
+Configure GitHub
+----------------
-Configure GitHub `webhook events
-<https://developer.github.com/webhooks/creating/>`_.
+There are currently two options available. The GitHub project's owner can
+either manually set up a web-hook or install a GitHub Application. In the
+first case, the project's owner needs to know the Zuul endpoint and the
+webhook secret.
-Set *Payload URL* to
-``http://<zuul-hostname>/connection/<connection-name>/payload``.
-Set *Content Type* to ``application/json``.
+Web-Hook
+........
+
+To configure a project's `webhook events
+<https://developer.github.com/webhooks/creating/>`_:
+
+* Set *Payload URL* to
+ ``http://<zuul-hostname>/connection/<connection-name>/payload``.
+
+* Set *Content Type* to ``application/json``.
Select *Events* you are interested in. See below for the supported events.
+You will also need to have a GitHub user created for your Zuul:
+
+* Zuul's public key needs to be added to the GitHub account.
+
+* An api_token needs to be created as well; see this `article
+  <https://help.github.com/articles/creating-an-access-token-for-command-line-use/>`_.
+
+Then, in zuul.conf, set webhook_token and api_token.
+
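+For example, the corresponding connection section of ``zuul.conf`` might look
+like the following sketch; the connection name and token values are
+placeholders::
+
+   [connection github]
+   driver=github
+   webhook_token=0000000000000000000000000000000000000000
+   api_token=0000000000000000000000000000000000000000
+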
+Application
+...........
+
+To create a `GitHub application
+<https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:
+
+* Go to your organization settings page to create the application, e.g.:
+ https://github.com/organizations/my-org/settings/apps/new
+
+* Set GitHub App name to "my-org-zuul"
+
+* Set Setup URL to your setup documentation; when users install the
+  application they are redirected to this URL
+
+* Set Webhook URL to
+ ``http://<zuul-hostname>/connection/<connection-name>/payload``.
+
+* Create a Webhook secret
+
+* Set permissions:
+
+ * Commit statuses: Read & Write
+
+ * Issues: Read & Write
+
+ * Pull requests: Read & Write
+
+ * Repository contents: Read & Write (write access lets Zuul merge changes)
+
+* Set events subscription:
+
+ * Label
+
+ * Status
+
+ * Issue comment
+
+ * Issues
+
+ * Pull request
+
+ * Pull request review
+
+ * Pull request review comment
+
+ * Commit comment
+
+ * Create
+
+ * Push
+
+ * Release
+
+* Set "Where can this GitHub App be installed" to "Any account"
+
+* Create the App
+
+* Generate a Private key in the app settings page
+
+Then, in zuul.conf, set webhook_token, app_id and app_key (see the example
+below). After restarting zuul-scheduler, verify in the 'Advanced' tab that
+the Ping payload works (green tick and 200 response).
+
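+For example, the corresponding connection section of ``zuul.conf`` might look
+like the following sketch; the connection name, application id, key path and
+token value are placeholders::
+
+   [connection github]
+   driver=github
+   app_id=1234
+   app_key=/etc/zuul/github.pem
+   webhook_token=0000000000000000000000000000000000000000
+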
+Users can now install the application using its public page, e.g.:
+https://github.com/apps/my-org-zuul
+
+
Connection Configuration
------------------------
diff --git a/doc/source/admin/drivers/sql.rst b/doc/source/admin/drivers/sql.rst
index a269f5d..b9ce24b 100644
--- a/doc/source/admin/drivers/sql.rst
+++ b/doc/source/admin/drivers/sql.rst
@@ -43,6 +43,14 @@
<http://docs.sqlalchemy.org/en/latest/core/pooling.html#setting-pool-recycle>`_
for more information.
+ .. attr:: table_prefix
+ :default: ''
+
+ The string to prefix the table names with. This makes it possible to run
+ several Zuul deployments against the same database. This can be useful
+ if you rely on an external database which you do not fully control.
+ The default is to have no prefix.
+
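+ For example, to store this deployment's results in tables named
+ ``ci_zuul_buildset`` and ``ci_zuul_build``, the connection might be
+ configured as in the following sketch (the connection name, dburi and
+ prefix are illustrative)::
+
+   [connection resultsdb]
+   driver=sql
+   dburi=mysql+pymysql://user:pass@localhost/zuul
+   table_prefix=ci_
+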
Reporter Configuration
----------------------
diff --git a/doc/source/admin/tenants.rst b/doc/source/admin/tenants.rst
index 4722750..48e7ba8 100644
--- a/doc/source/admin/tenants.rst
+++ b/doc/source/admin/tenants.rst
@@ -105,7 +105,7 @@
changes in response to proposed changes, and Zuul will read
configuration files in all of their branches.
- .. attr:: <project>:
+ .. attr:: <project>
The items in the list may either be simple string values of
the project names, or a dictionary with the project name as
diff --git a/doc/source/user/config.rst b/doc/source/user/config.rst
index 3ea20ab..173e615 100644
--- a/doc/source/user/config.rst
+++ b/doc/source/user/config.rst
@@ -609,92 +609,6 @@
tags from all the jobs and variants used in constructing the
frozen job, with no duplication.
- .. attr:: branches
-
- A regular expression (or list of regular expressions) which
- describe on what branches a job should run (or in the case of
- variants: to alter the behavior of a job for a certain branch).
-
- If there is no job definition for a given job which matches the
- branch of an item, then that job is not run for the item.
- Otherwise, all of the job variants which match that branch (and
- any other selection criteria) are used when freezing the job.
-
- This example illustrates a job called *run-tests* which uses a
- nodeset based on the current release of an operating system to
- perform its tests, except when testing changes to the stable/2.0
- branch, in which case it uses an older release:
-
- .. code-block:: yaml
-
- - job:
- name: run-tests
- nodeset: current-release
-
- - job:
- name: run-tests
- branches: stable/2.0
- nodeset: old-release
-
- In some cases, Zuul uses an implied value for the branch
- specifier if none is supplied:
-
- * For a job definition in a :term:`config-project`, no implied
- branch specifier is used. If no branch specifier appears, the
- job applies to all branches.
-
- * In the case of an :term:`untrusted-project`, if the project
- has only one branch, no implied branch specifier is applied to
- :ref:`job` definitions. If the project has more than one
- branch, the branch containing the job definition is used as an
- implied branch specifier.
-
- * In the case of a job variant defined within a :ref:`project`,
- if the project definition is in a :term:`config-project`, no
- implied branch specifier is used. If it appears in an
- :term:`untrusted-project`, with no branch specifier, the
- branch containing the project definition is used as an implied
- branch specifier.
-
- * In the case of a job variant defined within a
- :ref:`project-template`, if no branch specifier appears, the
- implied branch containing the project-template definition is
- used as an implied branch specifier. This means that
- definitions of the same project-template on different branches
- may run different jobs.
-
- When that project-template is used by a :ref:`project`
- definition within a :term:`untrusted-project`, the branch
- containing that project definition is combined with the branch
- specifier of the project-template. This means it is possible
- for a project to use a template on one branch, but not on
- another.
-
- This allows for the very simple and expected workflow where if a
- project defines a job on the ``master`` branch with no branch
- specifier, and then creates a new branch based on ``master``,
- any changes to that job definition within the new branch only
- affect that branch, and likewise, changes to the master branch
- only affect it.
-
- See :attr:`pragma.implied-branch-matchers` for how to override
- this behavior on a per-file basis.
-
- .. attr:: files
-
- This attribute indicates that the job should only run on changes
- where the specified files are modified. This is a regular
- expression or list of regular expressions.
-
- .. attr:: irrelevant-files
-
- This is a negative complement of **files**. It indicates that
- the job should run unless *all* of the files changed match this
- list. In other words, if the regular expression ``docs/.*`` is
- supplied, then this job will not run if the only files changed
- are in the docs directory. A regular expression or list of
- regular expressions.
-
.. attr:: secrets
A list of secrets which may be used by the job. A
@@ -798,13 +712,6 @@
are run after the parent's. See :ref:`job` for more
information.
- .. warning::
-
- If the path as specified does not exist, Zuul will try
- appending the extensions ``.yaml`` and ``.yml``. This
- behavior is deprecated and will be removed in the future all
- playbook paths should include the file extension.
-
.. attr:: post-run
The name of a playbook or list of playbooks to run after the
@@ -815,13 +722,6 @@
playbooks are run before the parent's. See :ref:`job` for more
information.
- .. warning::
-
- If the path as specified does not exist, Zuul will try
- appending the extensions ``.yaml`` and ``.yml``. This
- behavior is deprecated and will be removed in the future all
- playbook paths should include the file extension.
-
.. attr:: run
The name of the main playbook for this job. If it is not
@@ -833,13 +733,6 @@
run: playbooks/job-playbook.yaml
- .. warning::
-
- If the path as specified does not exist, Zuul will try
- appending the extensions ``.yaml`` and ``.yml``. This
- behavior is deprecated and will be removed in the future all
- playbook paths should include the file extension.
-
.. attr:: roles
A list of Ansible roles to prepare for the job. Because a job
@@ -978,6 +871,99 @@
it will remain set for all child jobs and variants (it can not be
set to ``false``).
+ .. _matchers:
+
+ The following job attributes are considered "matchers". They are
+ not inherited in the usual manner, instead, these attributes are
+ used to determine whether a specific variant is used when
+ running a job.
+
+ .. attr:: branches
+
+ A regular expression (or list of regular expressions) which
+ describe on what branches a job should run (or in the case of
+ variants: to alter the behavior of a job for a certain branch).
+
+ If there is no job definition for a given job which matches the
+ branch of an item, then that job is not run for the item.
+ Otherwise, all of the job variants which match that branch (and
+ any other selection criteria) are used when freezing the job.
+
+ This example illustrates a job called *run-tests* which uses a
+ nodeset based on the current release of an operating system to
+ perform its tests, except when testing changes to the stable/2.0
+ branch, in which case it uses an older release:
+
+ .. code-block:: yaml
+
+ - job:
+ name: run-tests
+ nodeset: current-release
+
+ - job:
+ name: run-tests
+ branches: stable/2.0
+ nodeset: old-release
+
+ In some cases, Zuul uses an implied value for the branch
+ specifier if none is supplied:
+
+ * For a job definition in a :term:`config-project`, no implied
+ branch specifier is used. If no branch specifier appears, the
+ job applies to all branches.
+
+ * In the case of an :term:`untrusted-project`, if the project
+ has only one branch, no implied branch specifier is applied to
+ :ref:`job` definitions. If the project has more than one
+ branch, the branch containing the job definition is used as an
+ implied branch specifier.
+
+ * In the case of a job variant defined within a :ref:`project`,
+ if the project definition is in a :term:`config-project`, no
+ implied branch specifier is used. If it appears in an
+ :term:`untrusted-project`, with no branch specifier, the
+ branch containing the project definition is used as an implied
+ branch specifier.
+
+ * In the case of a job variant defined within a
+ :ref:`project-template`, if no branch specifier appears, the
+ implied branch containing the project-template definition is
+ used as an implied branch specifier. This means that
+ definitions of the same project-template on different branches
+ may run different jobs.
+
+ When that project-template is used by a :ref:`project`
+ definition within a :term:`untrusted-project`, the branch
+ containing that project definition is combined with the branch
+ specifier of the project-template. This means it is possible
+ for a project to use a template on one branch, but not on
+ another.
+
+ This allows for the very simple and expected workflow where if a
+ project defines a job on the ``master`` branch with no branch
+ specifier, and then creates a new branch based on ``master``,
+ any changes to that job definition within the new branch only
+ affect that branch, and likewise, changes to the master branch
+ only affect it.
+
+ See :attr:`pragma.implied-branch-matchers` for how to override
+ this behavior on a per-file basis.
+
+ .. attr:: files
+
+ This matcher indicates that the job should only run on changes
+ where the specified files are modified. This is a regular
+ expression or list of regular expressions.
+
+ .. attr:: irrelevant-files
+
+ This matcher is a negative complement of **files**. It
+ indicates that the job should run unless *all* of the files
+ changed match this list. In other words, if the regular
+ expression ``docs/.*`` is supplied, then this job will not run
+ if the only files changed are in the docs directory. A regular
+ expression or list of regular expressions.
+
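+ For example, a variant of the *run-tests* job above which should be
+ skipped for documentation-only changes might be written as:
+
+ .. code-block:: yaml
+
+    - job:
+        name: run-tests
+        irrelevant-files:
+          - docs/.*
+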
.. _project:
Project
@@ -1210,7 +1196,9 @@
label: controller-label
- name: compute1
label: compute-label
- - name: compute2
+ - name:
+ - compute2
+ - web
label: compute-label
groups:
- name: ceph-osd
@@ -1221,6 +1209,9 @@
- controller
- compute1
- compute2
+ - name: ceph-web
+ nodes:
+ - web
.. attr:: nodeset
@@ -1242,6 +1233,9 @@
The name of the node. This will appear in the Ansible inventory
for the job.
+ This can also be given as a list of strings. In that case, the named hosts
+ in the Ansible inventory will all share a common ansible_host address.
+
.. attr:: label
:required:
diff --git a/doc/source/user/encryption.rst b/doc/source/user/encryption.rst
index 7ced589..d45195f 100644
--- a/doc/source/user/encryption.rst
+++ b/doc/source/user/encryption.rst
@@ -15,9 +15,8 @@
which can be used by anyone to encrypt a secret and only Zuul is able
to decrypt it. Zuul serves each project's public key using its
build-in webserver. They can be fetched at the path
-``/keys/<source>/<project>.pub`` where ``<project>`` is the name of a
-project and ``<source>`` is the name of that project's connection in
-the main Zuul configuration file.
+``/<tenant>/<project>.pub`` where ``<project>`` is the canonical name
+of a project and ``<tenant>`` is the name of a tenant containing that project.
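+
+For example, assuming Zuul's web server is reachable at
+``https://zuul.example.com``, the public key of the project
+``git.example.com/org/project`` in the tenant ``example-tenant`` could be
+fetched from::
+
+   https://zuul.example.com/example-tenant/git.example.com/org/project.pub
+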
Zuul currently supports one encryption scheme, PKCS#1 with OAEP, which
can not store secrets longer than the 3760 bits (derived from the key
diff --git a/doc/source/user/jobs.rst b/doc/source/user/jobs.rst
index 989338a..4b6255b 100644
--- a/doc/source/user/jobs.rst
+++ b/doc/source/user/jobs.rst
@@ -220,14 +220,15 @@
`src/git.example.com/org/project`.
.. var:: projects
- :type: list
+ :type: dict
- A list of all projects prepared by Zuul for the item. It
+ A dictionary of all projects prepared by Zuul for the item. It
includes, at least, the item's own project. It also includes
the projects of any items this item depends on, as well as the
projects that appear in :attr:`job.required-projects`.
- This is a list of dictionaries, with each element consisting of:
+ This is a dictionary of dictionaries, keyed by each project's
+ `canonical_name`. Each entry consists of:
.. var:: name
@@ -264,6 +265,20 @@
This may be influenced by the branch or tag associated with
the item as well as the job configuration.
+ For example, to access the source directory of a single known
+ project, you might use::
+
+ {{ zuul.projects['git.example.com/org/project'].src_dir }}
+
+ To iterate over the project list, you might write a task
+ something like::
+
+ - name: Sample project iteration
+ debug:
+ msg: "Project {{ item.name }} is at {{ item.src_dir }}
+ with_items: {{ zuul.projects.values() | list }}
+
+
.. var:: _projects
:type: dict
@@ -525,7 +540,8 @@
A job may return some values to Zuul to affect its behavior and for
use by other jobs.. To return a value, use the ``zuul_return``
-Ansible module in a job playbook. For example:
+Ansible module in a job playbook running on the executor 'localhost' node.
+For example:
.. code-block:: yaml
diff --git a/etc/status/public_html/zuul.app.js b/etc/status/public_html/zuul.app.js
index 7ceb2dd..bf90a4d 100644
--- a/etc/status/public_html/zuul.app.js
+++ b/etc/status/public_html/zuul.app.js
@@ -28,8 +28,6 @@
function zuul_build_dom($, container) {
// Build a default-looking DOM
var default_layout = '<div class="container">'
- + '<h1>Zuul Status</h1>'
- + '<p>Real-time status monitor of Zuul, the pipeline manager between Gerrit and Workers.</p>'
+ '<div class="zuul-container" id="zuul-container">'
+ '<div style="display: none;" class="alert" id="zuul_msg"></div>'
+ '<button class="btn pull-right zuul-spinner">updating <span class="glyphicon glyphicon-refresh"></span></button>'
diff --git a/etc/zuul.conf-sample b/etc/zuul.conf-sample
index f0e1765..17092af 100644
--- a/etc/zuul.conf-sample
+++ b/etc/zuul.conf-sample
@@ -38,6 +38,7 @@
listen_address=127.0.0.1
port=9000
static_cache_expiry=0
+;sql_connection_name=mydatabase
[webapp]
listen_address=0.0.0.0
diff --git a/playbooks/zuul-stream/templates/ansible.cfg.j2 b/playbooks/zuul-stream/templates/ansible.cfg.j2
index 24f459e..41ffc0c 100644
--- a/playbooks/zuul-stream/templates/ansible.cfg.j2
+++ b/playbooks/zuul-stream/templates/ansible.cfg.j2
@@ -1,5 +1,5 @@
[defaults]
-hostfile = {{ ansible_user_dir }}/inventory.yaml
+inventory = {{ ansible_user_dir }}/inventory.yaml
gathering = smart
gather_subset = !all
lookup_plugins = {{ ansible_user_dir }}/src/git.openstack.org/openstack-infra/zuul/zuul/ansible/lookup
diff --git a/requirements.txt b/requirements.txt
index 4b8be3c..39a2b02 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,11 +7,7 @@
Paste
WebOb>=1.2.3
paramiko>=1.8.0,<2.0.0
-# Using a local fork of gitpython until at least these changes are in a
-# release.
-# https://github.com/gitpython-developers/GitPython/pull/682
-# https://github.com/gitpython-developers/GitPython/pull/686
-git+https://github.com/jeblair/GitPython.git@zuul#egg=GitPython
+GitPython>=2.1.8
python-daemon>=2.0.4,<2.1.0
extras
statsd>=1.0.0,<3.0
diff --git a/setup.cfg b/setup.cfg
index 63ff562..dea3158 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -28,6 +28,7 @@
zuul-bwrap = zuul.driver.bubblewrap:main
zuul-web = zuul.cmd.web:main
zuul-migrate = zuul.cmd.migrate:main
+ zuul-fingergw = zuul.cmd.fingergw:main
[build_sphinx]
source-dir = doc/source
diff --git a/tests/base.py b/tests/base.py
index f274ed6..69d9f55 100755
--- a/tests/base.py
+++ b/tests/base.py
@@ -1435,7 +1435,7 @@
host['host_vars']['ansible_connection'] = 'local'
hosts.append(dict(
- name='localhost',
+ name=['localhost'],
host_vars=dict(ansible_connection='local'),
host_keys=[]))
return hosts
@@ -2066,10 +2066,16 @@
FIXTURE_DIR,
self.config.get('scheduler', 'tenant_config')))
self.config.set('scheduler', 'state_dir', self.state_root)
+ self.config.set(
+ 'scheduler', 'command_socket',
+ os.path.join(self.test_root, 'scheduler.socket'))
self.config.set('merger', 'git_dir', self.merger_src_root)
self.config.set('executor', 'git_dir', self.executor_src_root)
self.config.set('executor', 'private_key_file', self.private_key_file)
self.config.set('executor', 'state_dir', self.executor_state_root)
+ self.config.set(
+ 'executor', 'command_socket',
+ os.path.join(self.test_root, 'executor.socket'))
self.statsd = FakeStatsd()
if self.config.has_section('statsd'):
@@ -2256,13 +2262,13 @@
branch='master', tag='init')
if 'job' in item:
if 'run' in item['job']:
- files['%s.yaml' % item['job']['run']] = ''
+ files['%s' % item['job']['run']] = ''
for fn in zuul.configloader.as_list(
item['job'].get('pre-run', [])):
- files['%s.yaml' % fn] = ''
+ files['%s' % fn] = ''
for fn in zuul.configloader.as_list(
item['job'].get('post-run', [])):
- files['%s.yaml' % fn] = ''
+ files['%s' % fn] = ''
root = os.path.join(self.test_root, "config")
if not os.path.exists(root):
@@ -2415,7 +2421,7 @@
'pydevd.CommandThread',
'pydevd.Reader',
'pydevd.Writer',
- 'FingerStreamer',
+ 'socketserver_Thread',
]
threads = [t for t in threading.enumerate()
if t.name not in whitelist]
diff --git a/tests/fixtures/config/ansible/git/common-config/zuul.yaml b/tests/fixtures/config/ansible/git/common-config/zuul.yaml
index 28bfce1..d0a8f7b 100644
--- a/tests/fixtures/config/ansible/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/ansible/git/common-config/zuul.yaml
@@ -129,10 +129,10 @@
parent: base-urls
name: hello
run: playbooks/hello-post.yaml
- post-run: playbooks/hello-post
+ post-run: playbooks/hello-post.yaml
- job:
parent: python27
name: failpost
run: playbooks/post-broken.yaml
- post-run: playbooks/post-broken
+ post-run: playbooks/post-broken.yaml
diff --git a/tests/fixtures/config/branch-variants/git/project-config/zuul.yaml b/tests/fixtures/config/branch-variants/git/project-config/zuul.yaml
index 161e5a1..48da2d4 100644
--- a/tests/fixtures/config/branch-variants/git/project-config/zuul.yaml
+++ b/tests/fixtures/config/branch-variants/git/project-config/zuul.yaml
@@ -34,10 +34,10 @@
- job:
name: base
parent: null
- pre-run: playbooks/base/pre
+ pre-run: playbooks/base/pre.yaml
post-run:
- - playbooks/base/post-ssh
- - playbooks/base/post-logs
+ - playbooks/base/post-ssh.yaml
+ - playbooks/base/post-logs.yaml
- project:
name: project-config
diff --git a/tests/fixtures/config/branch-variants/git/puppet-integration/.zuul.yaml b/tests/fixtures/config/branch-variants/git/puppet-integration/.zuul.yaml
index 322927f..7e9cbc3 100644
--- a/tests/fixtures/config/branch-variants/git/puppet-integration/.zuul.yaml
+++ b/tests/fixtures/config/branch-variants/git/puppet-integration/.zuul.yaml
@@ -1,16 +1,16 @@
- job:
name: puppet-base
- pre-run: playbooks/prepare-node-common
+ pre-run: playbooks/prepare-node-common.yaml
- job:
name: puppet-module-base
parent: puppet-base
- pre-run: playbooks/prepare-node-unit
+ pre-run: playbooks/prepare-node-unit.yaml
- job:
name: puppet-lint
parent: puppet-module-base
- run: playbooks/run-lint
+ run: playbooks/run-lint.yaml
tags:
- master
diff --git a/tests/fixtures/config/branch-variants/git/puppet-integration/stable.zuul.yaml b/tests/fixtures/config/branch-variants/git/puppet-integration/stable.zuul.yaml
index 4701b80..74704a0 100644
--- a/tests/fixtures/config/branch-variants/git/puppet-integration/stable.zuul.yaml
+++ b/tests/fixtures/config/branch-variants/git/puppet-integration/stable.zuul.yaml
@@ -1,16 +1,16 @@
- job:
name: puppet-base
- pre-run: playbooks/prepare-node-common
+ pre-run: playbooks/prepare-node-common.yaml
- job:
name: puppet-module-base
parent: puppet-base
- pre-run: playbooks/prepare-node-unit
+ pre-run: playbooks/prepare-node-unit.yaml
- job:
name: puppet-lint
parent: puppet-module-base
- run: playbooks/run-lint
+ run: playbooks/run-lint.yaml
tags:
- stable
diff --git a/tests/fixtures/config/inventory/git/common-config/zuul.yaml b/tests/fixtures/config/inventory/git/common-config/zuul.yaml
index 74ddf2d..ad530a7 100644
--- a/tests/fixtures/config/inventory/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/inventory/git/common-config/zuul.yaml
@@ -52,6 +52,16 @@
run: playbooks/single-inventory.yaml
- job:
+ name: single-inventory-list
+ nodeset:
+ nodes:
+ - name:
+ - compute
+ - controller
+ label: ubuntu-xenial
+ run: playbooks/single-inventory.yaml
+
+- job:
name: group-inventory
nodeset: nodeset1
run: playbooks/group-inventory.yaml
diff --git a/tests/fixtures/config/inventory/git/org_project/.zuul.yaml b/tests/fixtures/config/inventory/git/org_project/.zuul.yaml
index 1a8bf5d..6a29049 100644
--- a/tests/fixtures/config/inventory/git/org_project/.zuul.yaml
+++ b/tests/fixtures/config/inventory/git/org_project/.zuul.yaml
@@ -3,5 +3,6 @@
check:
jobs:
- single-inventory
+ - single-inventory-list
- group-inventory
- hostvars-inventory
diff --git a/tests/fixtures/config/job-output/git/common-config/zuul.yaml b/tests/fixtures/config/job-output/git/common-config/zuul.yaml
index 4df0020..9373038 100644
--- a/tests/fixtures/config/job-output/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/job-output/git/common-config/zuul.yaml
@@ -23,8 +23,8 @@
- job:
name: job-output-failure
- run: playbooks/job-output
- post-run: playbooks/job-output-failure-post
+ run: playbooks/job-output.yaml
+ post-run: playbooks/job-output-failure-post.yaml
- project:
name: org/project
diff --git a/tests/fixtures/config/post-playbook/git/common-config/zuul.yaml b/tests/fixtures/config/post-playbook/git/common-config/zuul.yaml
index 16d7dee..b00d4c2 100644
--- a/tests/fixtures/config/post-playbook/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/post-playbook/git/common-config/zuul.yaml
@@ -18,8 +18,8 @@
- job:
name: python27
- pre-run: playbooks/pre
- post-run: playbooks/post
+ pre-run: playbooks/pre.yaml
+ post-run: playbooks/post.yaml
vars:
waitpath: '{{zuul._test.test_root}}/{{zuul.build}}/test_wait'
run: playbooks/python27.yaml
diff --git a/tests/fixtures/config/pre-playbook/git/common-config/zuul.yaml b/tests/fixtures/config/pre-playbook/git/common-config/zuul.yaml
index 7817745..16f48b1 100644
--- a/tests/fixtures/config/pre-playbook/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/pre-playbook/git/common-config/zuul.yaml
@@ -18,6 +18,6 @@
- job:
name: python27
- pre-run: playbooks/pre
- post-run: playbooks/post
+ pre-run: playbooks/pre.yaml
+ post-run: playbooks/post.yaml
run: playbooks/python27.yaml
diff --git a/tests/fixtures/config/tenant-parser/git/common-config/zuul.yaml b/tests/fixtures/config/tenant-parser/git/common-config/zuul.yaml
index e21f967..a28ef54 100644
--- a/tests/fixtures/config/tenant-parser/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/tenant-parser/git/common-config/zuul.yaml
@@ -18,8 +18,10 @@
- job:
name: common-config-job
+# Use the canonical name here. This should be merged with the org/project1 in
+# the other repo.
- project:
- name: org/project1
+ name: review.example.com/org/project1
check:
jobs:
- common-config-job
diff --git a/tests/fixtures/zuul-sql-driver-prefix.conf b/tests/fixtures/zuul-sql-driver-prefix.conf
new file mode 100644
index 0000000..1406474
--- /dev/null
+++ b/tests/fixtures/zuul-sql-driver-prefix.conf
@@ -0,0 +1,28 @@
+[gearman]
+server=127.0.0.1
+
+[scheduler]
+tenant_config=main.yaml
+
+[merger]
+git_dir=/tmp/zuul-test/merger-git
+git_user_email=zuul@example.com
+git_user_name=zuul
+
+[executor]
+git_dir=/tmp/zuul-test/executor-git
+
+[connection gerrit]
+driver=gerrit
+server=review.example.com
+user=jenkins
+sshkey=fake_id_rsa1
+
+[connection resultsdb]
+driver=sql
+dburi=$MYSQL_FIXTURE_DBURI$
+table_prefix=prefix_
+
+[connection resultsdb_failures]
+driver=sql
+dburi=$MYSQL_FIXTURE_DBURI$
diff --git a/tests/unit/test_connection.py b/tests/unit/test_connection.py
index c882d3a..054ee5f 100644
--- a/tests/unit/test_connection.py
+++ b/tests/unit/test_connection.py
@@ -60,14 +60,19 @@
class TestSQLConnection(ZuulDBTestCase):
config_file = 'zuul-sql-driver.conf'
tenant_config_file = 'config/sql-driver/main.yaml'
+ expected_table_prefix = ''
- def test_sql_tables_created(self, metadata_table=None):
+ def test_sql_tables_created(self):
"Test the tables for storing results are created properly"
- buildset_table = 'zuul_buildset'
- build_table = 'zuul_build'
- insp = sa.engine.reflection.Inspector(
- self.connections.connections['resultsdb'].engine)
+ connection = self.connections.connections['resultsdb']
+ insp = sa.engine.reflection.Inspector(connection.engine)
+
+ table_prefix = connection.table_prefix
+ self.assertEqual(self.expected_table_prefix, table_prefix)
+
+ buildset_table = table_prefix + 'zuul_buildset'
+ build_table = table_prefix + 'zuul_build'
self.assertEqual(13, len(insp.get_columns(buildset_table)))
self.assertEqual(10, len(insp.get_columns(build_table)))
@@ -216,6 +221,11 @@
'Build failed.', buildsets_resultsdb_failures[0]['message'])
+class TestSQLConnectionPrefix(TestSQLConnection):
+ config_file = 'zuul-sql-driver-prefix.conf'
+ expected_table_prefix = 'prefix_'
+
+
class TestConnectionsBadSQL(ZuulDBTestCase):
config_file = 'zuul-sql-driver-bad.conf'
tenant_config_file = 'config/sql-driver/main.yaml'
diff --git a/tests/unit/test_executor.py b/tests/unit/test_executor.py
index 5d27663..474859d 100755
--- a/tests/unit/test_executor.py
+++ b/tests/unit/test_executor.py
@@ -416,15 +416,15 @@
job)
def test_getHostList_host_keys(self):
- # Test without ssh_port set
+ # Test without connection_port set
node = {'name': 'fake-host',
'host_keys': ['fake-host-key'],
'interface_ip': 'localhost'}
keys = self.test_job.getHostList({'nodes': [node]})[0]['host_keys']
self.assertEqual(keys[0], 'localhost fake-host-key')
- # Test with custom ssh_port set
- node['ssh_port'] = 22022
+ # Test with custom connection_port set
+ node['connection_port'] = 22022
keys = self.test_job.getHostList({'nodes': [node]})[0]['host_keys']
self.assertEqual(keys[0], '[localhost]:22022 fake-host-key')
diff --git a/tests/unit/test_inventory.py b/tests/unit/test_inventory.py
index 04dcb05..1c41f5f 100644
--- a/tests/unit/test_inventory.py
+++ b/tests/unit/test_inventory.py
@@ -57,6 +57,26 @@
self.executor_server.release()
self.waitUntilSettled()
+ def test_single_inventory_list(self):
+
+ inventory = self._get_build_inventory('single-inventory-list')
+
+ all_nodes = ('compute', 'controller')
+ self.assertIn('all', inventory)
+ self.assertIn('hosts', inventory['all'])
+ self.assertIn('vars', inventory['all'])
+ for node_name in all_nodes:
+ self.assertIn(node_name, inventory['all']['hosts'])
+ self.assertIn('zuul', inventory['all']['vars'])
+ z_vars = inventory['all']['vars']['zuul']
+ self.assertIn('executor', z_vars)
+ self.assertIn('src_root', z_vars['executor'])
+ self.assertIn('job', z_vars)
+ self.assertEqual(z_vars['job'], 'single-inventory-list')
+
+ self.executor_server.release()
+ self.waitUntilSettled()
+
def test_group_inventory(self):
inventory = self._get_build_inventory('group-inventory')
diff --git a/tests/unit/test_nodepool.py b/tests/unit/test_nodepool.py
index d3f9ddb..aa0f082 100644
--- a/tests/unit/test_nodepool.py
+++ b/tests/unit/test_nodepool.py
@@ -67,8 +67,8 @@
# Test a simple node request
nodeset = model.NodeSet()
- nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
- nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['controller', 'foo'], 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['compute'], 'ubuntu-xenial'))
job = model.Job('testjob')
job.nodeset = nodeset
request = self.nodepool.requestNodes(None, job)
@@ -99,8 +99,8 @@
# Test that node requests are re-submitted after disconnect
nodeset = model.NodeSet()
- nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
- nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['controller'], 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['compute'], 'ubuntu-xenial'))
job = model.Job('testjob')
job.nodeset = nodeset
self.fake_nodepool.paused = True
@@ -116,8 +116,8 @@
# Test that node requests can be canceled
nodeset = model.NodeSet()
- nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
- nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['controller'], 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['compute'], 'ubuntu-xenial'))
job = model.Job('testjob')
job.nodeset = nodeset
self.fake_nodepool.paused = True
@@ -131,8 +131,8 @@
# Test that a resubmitted request would not lock nodes
nodeset = model.NodeSet()
- nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
- nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['controller'], 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['compute'], 'ubuntu-xenial'))
job = model.Job('testjob')
job.nodeset = nodeset
request = self.nodepool.requestNodes(None, job)
@@ -152,8 +152,8 @@
# Test that a lost request would not lock nodes
nodeset = model.NodeSet()
- nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
- nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['controller'], 'ubuntu-xenial'))
+ nodeset.addNode(model.Node(['compute'], 'ubuntu-xenial'))
job = model.Job('testjob')
job.nodeset = nodeset
request = self.nodepool.requestNodes(None, job)
diff --git a/tests/unit/test_scheduler.py b/tests/unit/test_scheduler.py
index cad557e..aacc81e 100755
--- a/tests/unit/test_scheduler.py
+++ b/tests/unit/test_scheduler.py
@@ -2581,7 +2581,7 @@
self.assertEqual('project-merge', status_jobs[0]['name'])
# TODO(mordred) pull uuids from self.builds
self.assertEqual(
- 'static/stream.html?uuid={uuid}&logfile=console.log'.format(
+ 'stream.html?uuid={uuid}&logfile=console.log'.format(
uuid=status_jobs[0]['uuid']),
status_jobs[0]['url'])
self.assertEqual(
@@ -2597,7 +2597,7 @@
status_jobs[0]['report_url'])
self.assertEqual('project-test1', status_jobs[1]['name'])
self.assertEqual(
- 'static/stream.html?uuid={uuid}&logfile=console.log'.format(
+ 'stream.html?uuid={uuid}&logfile=console.log'.format(
uuid=status_jobs[1]['uuid']),
status_jobs[1]['url'])
self.assertEqual(
@@ -2613,7 +2613,7 @@
self.assertEqual('project-test2', status_jobs[2]['name'])
self.assertEqual(
- 'static/stream.html?uuid={uuid}&logfile=console.log'.format(
+ 'stream.html?uuid={uuid}&logfile=console.log'.format(
uuid=status_jobs[2]['uuid']),
status_jobs[2]['url'])
self.assertEqual(
@@ -4210,7 +4210,7 @@
self.assertEqual('gate', job['pipeline'])
self.assertEqual(False, job['retry'])
self.assertEqual(
- 'static/stream.html?uuid={uuid}&logfile=console.log'
+ 'stream.html?uuid={uuid}&logfile=console.log'
.format(uuid=job['uuid']), job['url'])
self.assertEqual(
'finger://{hostname}/{uuid}'.format(
diff --git a/tests/unit/test_log_streamer.py b/tests/unit/test_streaming.py
similarity index 68%
rename from tests/unit/test_log_streamer.py
rename to tests/unit/test_streaming.py
index c808540..4bb541a 100644
--- a/tests/unit/test_log_streamer.py
+++ b/tests/unit/test_streaming.py
@@ -28,6 +28,7 @@
import zuul.web
import zuul.lib.log_streamer
+import zuul.lib.fingergw
import tests.base
@@ -60,7 +61,7 @@
class TestStreaming(tests.base.AnsibleZuulTestCase):
tenant_config_file = 'config/streamer/main.yaml'
- log = logging.getLogger("zuul.test.test_log_streamer.TestStreaming")
+ log = logging.getLogger("zuul.test_streaming")
def setUp(self):
super(TestStreaming, self).setUp()
@@ -158,7 +159,7 @@
def runWSClient(self, build_uuid, event):
async def client(loop, build_uuid, event):
- uri = 'http://[::1]:9000/console-stream'
+ uri = 'http://[::1]:9000/tenant-one/console-stream'
try:
session = aiohttp.ClientSession(loop=loop)
async with session.ws_connect(uri) as ws:
@@ -181,9 +182,38 @@
loop.run_until_complete(client(loop, build_uuid, event))
loop.close()
+ def runFingerClient(self, build_uuid, gateway_address, event):
+ # Wait until the gateway is started
+ while True:
+ try:
+ # NOTE(Shrews): This causes the gateway to begin to handle
+ # a request for which it never receives data, and thus
+ # causes the getCommand() method to time out (seen in the
+ # test results, but harmless).
+ with socket.create_connection(gateway_address) as s:
+ break
+ except ConnectionRefusedError:
+ time.sleep(0.1)
+
+ with socket.create_connection(gateway_address) as s:
+ msg = "%s\n" % build_uuid
+ s.sendall(msg.encode('utf-8'))
+ event.set() # notify we are connected and req sent
+ while True:
+ data = s.recv(1024)
+ if not data:
+ break
+ self.streaming_data += data.decode('utf-8')
+ s.shutdown(socket.SHUT_RDWR)
+
def test_websocket_streaming(self):
+ # Start the finger streamer daemon
+ streamer = zuul.lib.log_streamer.LogStreamer(
+ None, self.host, 0, self.executor_server.jobdir_root)
+ self.addCleanup(streamer.stop)
+
# Need to set the streaming port before submitting the job
- finger_port = 7902
+ finger_port = streamer.server.socket.getsockname()[1]
self.executor_server.log_streaming_port = finger_port
A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -216,11 +246,6 @@
logfile = open(ansible_log, 'r')
self.addCleanup(logfile.close)
- # Start the finger streamer daemon
- streamer = zuul.lib.log_streamer.LogStreamer(
- None, self.host, finger_port, self.executor_server.jobdir_root)
- self.addCleanup(streamer.stop)
-
# Start the web server
web_server = zuul.web.ZuulWeb(
listen_address='::', listen_port=9000,
@@ -265,3 +290,83 @@
self.log.debug("\n\nFile contents: %s\n\n", file_contents)
self.log.debug("\n\nStreamed: %s\n\n", self.ws_client_results)
self.assertEqual(file_contents, self.ws_client_results)
+
+ def test_finger_gateway(self):
+ # Start the finger streamer daemon
+ streamer = zuul.lib.log_streamer.LogStreamer(
+ None, self.host, 0, self.executor_server.jobdir_root)
+ self.addCleanup(streamer.stop)
+ finger_port = streamer.server.socket.getsockname()[1]
+
+ # Need to set the streaming port before submitting the job
+ self.executor_server.log_streaming_port = finger_port
+
+ A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+ self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+
+ # We don't have any real synchronization for the ansible jobs, so
+ # just wait until we get our running build.
+ while not len(self.builds):
+ time.sleep(0.1)
+ build = self.builds[0]
+ self.assertEqual(build.name, 'python27')
+
+ build_dir = os.path.join(self.executor_server.jobdir_root, build.uuid)
+ while not os.path.exists(build_dir):
+ time.sleep(0.1)
+
+ # Need to wait to make sure that jobdir gets set
+ while build.jobdir is None:
+ time.sleep(0.1)
+ build = self.builds[0]
+
+ # Wait for the job to begin running and create the ansible log file.
+ # The job waits to complete until the flag file exists, so we can
+ # safely access the log here. We only open it (to force a file handle
+ # to be kept open for it after the job finishes) but wait to read the
+ # contents until the job is done.
+ ansible_log = os.path.join(build.jobdir.log_root, 'job-output.txt')
+ while not os.path.exists(ansible_log):
+ time.sleep(0.1)
+ logfile = open(ansible_log, 'r')
+ self.addCleanup(logfile.close)
+
+ # Start the finger gateway daemon
+ gateway = zuul.lib.fingergw.FingerGateway(
+ ('127.0.0.1', self.gearman_server.port, None, None, None),
+ (self.host, 0),
+ user=None,
+ command_socket=None,
+ pid_file=None
+ )
+ gateway.start()
+ self.addCleanup(gateway.stop)
+
+ gateway_port = gateway.server.socket.getsockname()[1]
+ gateway_address = (self.host, gateway_port)
+
+ # Start a thread with the finger client
+ finger_client_event = threading.Event()
+ self.finger_client_results = ''
+ finger_client_thread = threading.Thread(
+ target=self.runFingerClient,
+ args=(build.uuid, gateway_address, finger_client_event)
+ )
+ finger_client_thread.start()
+ finger_client_event.wait()
+
+ # Allow the job to complete
+ flag_file = os.path.join(build_dir, 'test_wait')
+ open(flag_file, 'w').close()
+
+ # Wait for the finger client to complete, which it should when
+ # it's received the full log.
+ finger_client_thread.join()
+
+ self.waitUntilSettled()
+
+ file_contents = logfile.read()
+ logfile.close()
+ self.log.debug("\n\nFile contents: %s\n\n", file_contents)
+ self.log.debug("\n\nStreamed: %s\n\n", self.streaming_data)
+ self.assertEqual(file_contents, self.streaming_data)
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
index 54cf111..1f401d0 100755
--- a/tests/unit/test_v3.py
+++ b/tests/unit/test_v3.py
@@ -935,6 +935,27 @@
self.assertIn('not a dictionary', A.messages[0],
"A should have a syntax error reported")
+ def test_yaml_duplicate_key_error(self):
+ in_repo_conf = textwrap.dedent(
+ """
+ - job:
+ name: foo
+ name: bar
+ """)
+
+ file_dict = {'.zuul.yaml': in_repo_conf}
+ A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+ files=file_dict)
+ A.addApproval('Code-Review', 2)
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+ self.waitUntilSettled()
+
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(A.reported, 1,
+ "A should report failure")
+ self.assertIn('appears more than once', A.messages[0],
+ "A should have a syntax error reported")
+
def test_yaml_key_error(self):
in_repo_conf = textwrap.dedent(
"""
@@ -1935,8 +1956,8 @@
name: parent
roles:
- zuul: bare-role
- pre-run: playbooks/parent-pre
- post-run: playbooks/parent-post
+ pre-run: playbooks/parent-pre.yaml
+ post-run: playbooks/parent-post.yaml
- job:
name: project-test
diff --git a/tools/encrypt_secret.py b/tools/encrypt_secret.py
index 9b52846..2a4ea1d 100755
--- a/tools/encrypt_secret.py
+++ b/tools/encrypt_secret.py
@@ -43,10 +43,7 @@
parser.add_argument('url',
help="The base URL of the zuul server and tenant. "
"E.g., https://zuul.example.com/tenant-name")
- # TODO(jeblair,mordred): When projects have canonical names, use that here.
# TODO(jeblair): Throw a fit if SSL is not used.
- parser.add_argument('source',
- help="The Zuul source of the project.")
parser.add_argument('project',
help="The name of the project.")
parser.add_argument('--infile',
@@ -61,8 +58,7 @@
"to standard output.")
args = parser.parse_args()
- req = Request("%s/keys/%s/%s.pub" % (
- args.url, args.source, args.project))
+ req = Request("%s/%s.pub" % (args.url, args.project))
pubkey = urlopen(req)
if args.infile:
diff --git a/tools/test-logs.sh b/tools/test-logs.sh
index bf2147d..a514dd8 100644
--- a/tools/test-logs.sh
+++ b/tools/test-logs.sh
@@ -42,7 +42,7 @@
cat >$WORK_DIR/ansible.cfg <<EOF
[defaults]
-hostfile = $INVENTORY
+inventory = $INVENTORY
gathering = smart
gather_subset = !all
fact_caching = jsonfile
diff --git a/tox.ini b/tox.ini
index 28d6000..5efc4c0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -41,9 +41,6 @@
[testenv:venv]
commands = {posargs}
-[testenv:validate-layout]
-commands = zuul-server -c etc/zuul.conf-sample -t -l {posargs}
-
[testenv:nodepool]
setenv =
OS_TEST_PATH = ./tests/nodepool
diff --git a/zuul/ansible/callback/zuul_stream.py b/zuul/ansible/callback/zuul_stream.py
index 8845e9b..df28a57 100644
--- a/zuul/ansible/callback/zuul_stream.py
+++ b/zuul/ansible/callback/zuul_stream.py
@@ -150,7 +150,7 @@
buff += more
if buff:
self._log_streamline(
- host, line.decode("utf-8", "backslashreplace"))
+ host, buff.decode("utf-8", "backslashreplace"))
def _log_streamline(self, host, line):
if "[Zuul] Task exit code" in line:
diff --git a/zuul/ansible/library/zuul_return.py b/zuul/ansible/library/zuul_return.py
index 9f3332b..4935226 100644
--- a/zuul/ansible/library/zuul_return.py
+++ b/zuul/ansible/library/zuul_return.py
@@ -63,7 +63,7 @@
path = os.path.join(os.environ['ZUUL_JOBDIR'], 'work',
'results.json')
set_value(path, p['data'], p['file'])
- module.exit_json(changed=True, e=os.environ)
+ module.exit_json(changed=True, e=os.environ.copy())
from ansible.module_utils.basic import * # noqa
from ansible.module_utils.basic import AnsibleModule
diff --git a/zuul/cmd/__init__.py b/zuul/cmd/__init__.py
index e150f9c..236fd9f 100755
--- a/zuul/cmd/__init__.py
+++ b/zuul/cmd/__init__.py
@@ -23,6 +23,7 @@
import logging.config
import os
import signal
+import socket
import sys
import traceback
import threading
@@ -184,3 +185,12 @@
pass
with daemon.DaemonContext(pidfile=pid):
self.run()
+
+ def send_command(self, cmd):
+ command_socket = get_default(
+ self.config, self.app_name, 'command_socket',
+ '/var/lib/zuul/%s.socket' % self.app_name)
+ s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+ s.connect(command_socket)
+ cmd = '%s\n' % cmd
+ s.sendall(cmd.encode('utf8'))
diff --git a/zuul/cmd/executor.py b/zuul/cmd/executor.py
index aef8c95..ade9715 100755
--- a/zuul/cmd/executor.py
+++ b/zuul/cmd/executor.py
@@ -18,7 +18,6 @@
import logging
import os
import pwd
-import socket
import sys
import signal
import tempfile
@@ -52,15 +51,6 @@
if self.args.command:
self.args.nodaemon = True
- def send_command(self, cmd):
- state_dir = get_default(self.config, 'executor', 'state_dir',
- '/var/lib/zuul', expand_user=True)
- path = os.path.join(state_dir, 'executor.socket')
- s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- s.connect(path)
- cmd = '%s\n' % cmd
- s.sendall(cmd.encode('utf8'))
-
def exit_handler(self):
self.executor.stop()
self.executor.join()
diff --git a/zuul/cmd/fingergw.py b/zuul/cmd/fingergw.py
new file mode 100644
index 0000000..920eed8
--- /dev/null
+++ b/zuul/cmd/fingergw.py
@@ -0,0 +1,109 @@
+#!/usr/bin/env python
+# Copyright 2017 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import signal
+import sys
+
+import zuul.cmd
+import zuul.lib.fingergw
+
+from zuul.lib.config import get_default
+
+
+class FingerGatewayApp(zuul.cmd.ZuulDaemonApp):
+ '''
+ Class for the daemon that will distribute any finger requests to the
+ appropriate Zuul executor handling the specified build UUID.
+ '''
+ app_name = 'fingergw'
+ app_description = 'The Zuul finger gateway.'
+
+ def __init__(self):
+ super(FingerGatewayApp, self).__init__()
+ self.gateway = None
+
+ def createParser(self):
+ parser = super(FingerGatewayApp, self).createParser()
+ parser.add_argument('command',
+ choices=zuul.lib.fingergw.COMMANDS,
+ nargs='?')
+ return parser
+
+ def parseArguments(self, args=None):
+ super(FingerGatewayApp, self).parseArguments()
+ if self.args.command:
+ self.args.nodaemon = True
+
+ def run(self):
+ '''
+ Main entry point for the FingerGatewayApp.
+
+ Called by the main() method of the parent class.
+ '''
+ if self.args.command in zuul.lib.fingergw.COMMANDS:
+ self.send_command(self.args.command)
+ sys.exit(0)
+
+ self.setup_logging('fingergw', 'log_config')
+ self.log = logging.getLogger('zuul.fingergw')
+
+ # Get values from configuration file
+ host = get_default(self.config, 'fingergw', 'listen_address', '::')
+ port = int(get_default(self.config, 'fingergw', 'port', 79))
+ user = get_default(self.config, 'fingergw', 'user', 'zuul')
+ cmdsock = get_default(
+ self.config, 'fingergw', 'command_socket',
+ '/var/lib/zuul/%s.socket' % self.app_name)
+ gear_server = get_default(self.config, 'gearman', 'server')
+ gear_port = get_default(self.config, 'gearman', 'port', 4730)
+ ssl_key = get_default(self.config, 'gearman', 'ssl_key')
+ ssl_cert = get_default(self.config, 'gearman', 'ssl_cert')
+ ssl_ca = get_default(self.config, 'gearman', 'ssl_ca')
+
+ self.gateway = zuul.lib.fingergw.FingerGateway(
+ (gear_server, gear_port, ssl_key, ssl_cert, ssl_ca),
+ (host, port),
+ user,
+ cmdsock,
+ self.getPidFile(),
+ )
+
+ self.log.info('Starting Zuul finger gateway app')
+ self.gateway.start()
+
+ if self.args.nodaemon:
+ # NOTE(Shrews): When running in non-daemon mode, although sending
+ # the 'stop' command via the command socket will shut down the
+ # gateway, it's still necessary to Ctrl+C to stop the app.
+ while True:
+ try:
+ signal.pause()
+ except KeyboardInterrupt:
+ print("Ctrl + C: asking gateway to exit nicely...\n")
+ self.stop()
+ break
+ else:
+ self.gateway.wait()
+
+ self.log.info('Stopped Zuul finger gateway app')
+
+ def stop(self):
+ if self.gateway:
+ self.gateway.stop()
+
+
+def main():
+ FingerGatewayApp().main()
diff --git a/zuul/cmd/merger.py b/zuul/cmd/merger.py
index 56b6b44..7db1bee 100755
--- a/zuul/cmd/merger.py
+++ b/zuul/cmd/merger.py
@@ -15,8 +15,10 @@
# under the License.
import signal
+import sys
import zuul.cmd
+import zuul.merger.server
# No zuul imports here because they pull in paramiko which must not be
# imported until after the daemonization.
@@ -28,14 +30,28 @@
app_name = 'merger'
app_description = 'A standalone Zuul merger.'
- def exit_handler(self, signum, frame):
- signal.signal(signal.SIGUSR1, signal.SIG_IGN)
+ def createParser(self):
+ parser = super(Merger, self).createParser()
+ parser.add_argument('command',
+ choices=zuul.merger.server.COMMANDS,
+ nargs='?')
+ return parser
+
+ def parseArguments(self, args=None):
+ super(Merger, self).parseArguments()
+ if self.args.command:
+ self.args.nodaemon = True
+
+ def exit_handler(self):
self.merger.stop()
self.merger.join()
def run(self):
# See comment at top of file about zuul imports
import zuul.merger.server
+ if self.args.command in zuul.merger.server.COMMANDS:
+ self.send_command(self.args.command)
+ sys.exit(0)
self.configure_connections(source_only=True)
@@ -45,14 +61,18 @@
self.connections)
self.merger.start()
- signal.signal(signal.SIGUSR1, self.exit_handler)
signal.signal(signal.SIGUSR2, zuul.cmd.stack_dump_handler)
- while True:
- try:
- signal.pause()
- except KeyboardInterrupt:
- print("Ctrl + C: asking merger to exit nicely...\n")
- self.exit_handler(signal.SIGINT, None)
+
+ if self.args.nodaemon:
+ while True:
+ try:
+ signal.pause()
+ except KeyboardInterrupt:
+ print("Ctrl + C: asking merger to exit nicely...\n")
+ self.exit_handler()
+ sys.exit(0)
+ else:
+ self.merger.join()
def main():
diff --git a/zuul/cmd/scheduler.py b/zuul/cmd/scheduler.py
index 539d55b..7722d6e 100755
--- a/zuul/cmd/scheduler.py
+++ b/zuul/cmd/scheduler.py
@@ -22,6 +22,7 @@
import zuul.cmd
from zuul.lib.config import get_default
from zuul.lib.statsd import get_statsd_config
+import zuul.scheduler
# No zuul imports here because they pull in paramiko which must not be
# imported until after the daemonization.
@@ -37,6 +38,18 @@
super(Scheduler, self).__init__()
self.gear_server_pid = None
+ def createParser(self):
+ parser = super(Scheduler, self).createParser()
+ parser.add_argument('command',
+ choices=zuul.scheduler.COMMANDS,
+ nargs='?')
+ return parser
+
+ def parseArguments(self, args=None):
+ super(Scheduler, self).parseArguments()
+ if self.args.command:
+ self.args.nodaemon = True
+
def reconfigure_handler(self, signum, frame):
signal.signal(signal.SIGHUP, signal.SIG_IGN)
self.log.debug("Reconfiguration triggered")
@@ -48,8 +61,7 @@
self.log.exception("Reconfiguration failed:")
signal.signal(signal.SIGHUP, self.reconfigure_handler)
- def exit_handler(self, signum, frame):
- signal.signal(signal.SIGUSR1, signal.SIG_IGN)
+ def exit_handler(self):
self.sched.exit()
self.sched.join()
self.stop_gear_server()
@@ -104,6 +116,10 @@
def run(self):
# See comment at top of file about zuul imports
import zuul.scheduler
+ if self.args.command in zuul.scheduler.COMMANDS:
+ self.send_command(self.args.command)
+ sys.exit(0)
+ # See comment at top of file about zuul imports
import zuul.executor.client
import zuul.merger.client
import zuul.nodepool
@@ -162,14 +178,17 @@
webapp.start()
signal.signal(signal.SIGHUP, self.reconfigure_handler)
- signal.signal(signal.SIGUSR1, self.exit_handler)
- signal.signal(signal.SIGTERM, self.term_handler)
- while True:
- try:
- signal.pause()
- except KeyboardInterrupt:
- print("Ctrl + C: asking scheduler to exit nicely...\n")
- self.exit_handler(signal.SIGINT, None)
+
+ if self.args.nodaemon:
+ while True:
+ try:
+ signal.pause()
+ except KeyboardInterrupt:
+ print("Ctrl + C: asking scheduler to exit nicely...\n")
+ self.exit_handler()
+ sys.exit(0)
+ else:
+ self.sched.join()
def main():
diff --git a/zuul/cmd/web.py b/zuul/cmd/web.py
index 6e5489f..4687de6 100755
--- a/zuul/cmd/web.py
+++ b/zuul/cmd/web.py
@@ -22,6 +22,7 @@
import zuul.cmd
import zuul.web
+from zuul.driver.sql import sqlconnection
from zuul.lib.config import get_default
@@ -48,6 +49,30 @@
params['ssl_cert'] = get_default(self.config, 'gearman', 'ssl_cert')
params['ssl_ca'] = get_default(self.config, 'gearman', 'ssl_ca')
+ sql_conn_name = get_default(self.config, 'web',
+ 'sql_connection_name')
+ sql_conn = None
+ if sql_conn_name:
+ # we want a specific sql connection
+ sql_conn = self.connections.connections.get(sql_conn_name)
+ if not sql_conn:
+ self.log.error("Couldn't find sql connection '%s'" %
+ sql_conn_name)
+ sys.exit(1)
+ else:
+ # look for any sql connection
+ connections = [c for c in self.connections.connections.values()
+ if isinstance(c, sqlconnection.SQLConnection)]
+ if len(connections) > 1:
+ self.log.error("Multiple sql connection found, "
+ "set the sql_connection_name option "
+ "in zuul.conf [web] section")
+ sys.exit(1)
+ if connections:
+ # use this sql connection by default
+ sql_conn = connections[0]
+ params['sql_connection'] = sql_conn
+
try:
self.web = zuul.web.ZuulWeb(**params)
except Exception as e:
@@ -79,6 +104,10 @@
self.setup_logging('web', 'log_config')
self.log = logging.getLogger("zuul.WebServer")
+ self.configure_connections()
+
+ signal.signal(signal.SIGUSR2, zuul.cmd.stack_dump_handler)
+
try:
self._run()
except Exception:
diff --git a/zuul/configloader.py b/zuul/configloader.py
index 99f10f6..227e352 100644
--- a/zuul/configloader.py
+++ b/zuul/configloader.py
@@ -152,6 +152,40 @@
super(ProjectNotPermittedError, self).__init__(message)
+class YAMLDuplicateKeyError(ConfigurationSyntaxError):
+ def __init__(self, key, node, context, start_mark):
+ intro = textwrap.fill(textwrap.dedent("""\
+ Zuul encountered a syntax error while parsing its configuration in the
+ repo {repo} on branch {branch}. The error was:""".format(
+ repo=context.project.name,
+ branch=context.branch,
+ )))
+
+ e = textwrap.fill(textwrap.dedent("""\
+ The key "{key}" appears more than once; duplicate keys are not
+ permitted.
+ """.format(
+ key=key,
+ )))
+
+ m = textwrap.dedent("""\
+ {intro}
+
+ {error}
+
+ The error appears in the following stanza:
+
+ {content}
+
+ {start_mark}""")
+
+ m = m.format(intro=intro,
+ error=indent(str(e)),
+ content=indent(start_mark.snippet.rstrip()),
+ start_mark=str(start_mark))
+ super(YAMLDuplicateKeyError, self).__init__(m)
+
+
def indent(s):
return '\n'.join([' ' + x for x in s.split('\n')])
@@ -249,6 +283,14 @@
self.zuul_stream = stream
def construct_mapping(self, node, deep=False):
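+ # By default PyYAML silently keeps only the last value for a
+ # duplicated mapping key; detect duplicates here and raise a
+ # descriptive configuration syntax error instead.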
+ keys = set()
+ for k, v in node.value:
+ if k.value in keys:
+ mark = ZuulMark(node.start_mark, node.end_mark,
+ self.zuul_stream)
+ raise YAMLDuplicateKeyError(k.value, node, self.zuul_context,
+ mark)
+ keys.add(k.value)
r = super(ZuulSafeLoader, self).construct_mapping(node, deep)
keys = frozenset(r.keys())
if len(keys) == 1 and keys.intersection(self.zuul_node_types):
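
PyYAML calls construct_mapping for every mapping node before the key/value
pairs are folded into a dict, which is why the duplicate check above can see
repeated keys at all. A minimal sketch of the same idea against a bare
SafeLoader, without the Zuul context and mark objects:

    import yaml

    class DuplicateKeyLoader(yaml.SafeLoader):
        """SafeLoader variant that rejects duplicate mapping keys."""

        def construct_mapping(self, node, deep=False):
            seen = set()
            for key_node, value_node in node.value:
                if key_node.value in seen:
                    raise yaml.constructor.ConstructorError(
                        None, None,
                        'duplicate key "%s"' % key_node.value,
                        key_node.start_mark)
                seen.add(key_node.value)
            return super(DuplicateKeyLoader, self).construct_mapping(
                node, deep)

    # yaml.load(document, Loader=DuplicateKeyLoader) now raises on input
    # that repeats a key, e.g. a job stanza with two "name" entries.
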
@@ -340,7 +382,7 @@
class NodeSetParser(object):
@staticmethod
def getSchema(anonymous=False):
- node = {vs.Required('name'): str,
+ node = {vs.Required('name'): to_list(str),
vs.Required('label'): str,
}
@@ -365,11 +407,13 @@
node_names = set()
group_names = set()
for conf_node in as_list(conf['nodes']):
- if conf_node['name'] in node_names:
- raise DuplicateNodeError(conf['name'], conf_node['name'])
- node = model.Node(conf_node['name'], conf_node['label'])
+ for name in as_list(conf_node['name']):
+ if name in node_names:
+ raise DuplicateNodeError(name, conf_node['name'])
+ node = model.Node(as_list(conf_node['name']), conf_node['label'])
ns.addNode(node)
- node_names.add(conf_node['name'])
+ for name in as_list(conf_node['name']):
+ node_names.add(name)
for conf_group in as_list(conf.get('groups', [])):
for node_name in as_list(conf_group['nodes']):
if node_name not in node_names:
@@ -517,6 +561,7 @@
# "job.run.append(...)").
job = model.Job(name)
+ job.description = conf.get('description')
job.source_context = conf.get('_source_context')
job.source_line = conf.get('_start_mark').line + 1
@@ -1161,8 +1206,8 @@
tenant.config_projects,
tenant.untrusted_projects,
cached, tenant)
- unparsed_config.extend(tenant.config_projects_config)
- unparsed_config.extend(tenant.untrusted_projects_config)
+ unparsed_config.extend(tenant.config_projects_config, tenant=tenant)
+ unparsed_config.extend(tenant.untrusted_projects_config, tenant=tenant)
tenant.layout = TenantParser._parseLayout(base, tenant,
unparsed_config,
scheduler,
diff --git a/zuul/driver/sql/alembic/env.py b/zuul/driver/sql/alembic/env.py
index 4542a22..8cf2ecf 100644
--- a/zuul/driver/sql/alembic/env.py
+++ b/zuul/driver/sql/alembic/env.py
@@ -55,6 +55,13 @@
prefix='sqlalchemy.',
poolclass=pool.NullPool)
+ # we can get the table prefix via the tag object
+ tag = context.get_tag_argument()
+ if tag and isinstance(tag, dict):
+ table_prefix = tag.get('table_prefix', '')
+ else:
+ table_prefix = ''
+
with connectable.connect() as connection:
context.configure(
connection=connection,
@@ -62,7 +69,7 @@
)
with context.begin_transaction():
- context.run_migrations()
+ context.run_migrations(table_prefix=table_prefix)
if context.is_offline_mode():
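
Alembic passes any keyword arguments given to context.run_migrations()
through to each revision's upgrade()/downgrade() function, and the tag is an
opaque user-supplied value, so packing the prefix into the tag is enough for
the per-revision changes below. A sketch of what a new revision written
against this convention could look like (the revision identifiers and column
are placeholders, not a real Zuul migration):

    """Placeholder revision following the table_prefix convention."""

    from alembic import op
    import sqlalchemy as sa

    revision = '000000000000'
    down_revision = None

    def upgrade(table_prefix=''):
        # Build every table name from the prefix delivered via the tag.
        op.add_column(table_prefix + 'zuul_buildset',
                      sa.Column('example_column', sa.String(255)))

    def downgrade():
        raise Exception("Downgrade not supported")
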
diff --git a/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py b/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py
index b153cab..f42c2f3 100644
--- a/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py
+++ b/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py
@@ -16,8 +16,8 @@
import sqlalchemy as sa
-def upgrade():
- op.alter_column('zuul_buildset', 'score', nullable=True,
+def upgrade(table_prefix=''):
+ op.alter_column(table_prefix + 'zuul_buildset', 'score', nullable=True,
existing_type=sa.Integer)
diff --git a/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py b/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py
index 12e7c09..906df21 100644
--- a/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py
+++ b/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py
@@ -32,24 +32,28 @@
BUILD_TABLE = 'zuul_build'
-def upgrade():
+def upgrade(table_prefix=''):
+ prefixed_buildset = table_prefix + BUILDSET_TABLE
+ prefixed_build = table_prefix + BUILD_TABLE
+
# To allow a dashboard to show a per-project view, optionally filtered
# by pipeline.
op.create_index(
- 'project_pipeline_idx', BUILDSET_TABLE, ['project', 'pipeline'])
+ 'project_pipeline_idx', prefixed_buildset, ['project', 'pipeline'])
# To allow a dashboard to show a per-project-change view
op.create_index(
- 'project_change_idx', BUILDSET_TABLE, ['project', 'change'])
+ 'project_change_idx', prefixed_buildset, ['project', 'change'])
# To allow a dashboard to show a per-change view
- op.create_index('change_idx', BUILDSET_TABLE, ['change'])
+ op.create_index('change_idx', prefixed_buildset, ['change'])
# To allow a dashboard to show a job lib view. buildset_id is included
# so that it's a covering index and can satisfy the join back to buildset
# without an additional lookup.
op.create_index(
- 'job_name_buildset_id_idx', BUILD_TABLE, ['job_name', 'buildset_id'])
+ 'job_name_buildset_id_idx', prefixed_build,
+ ['job_name', 'buildset_id'])
def downgrade():
diff --git a/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py b/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py
index 783196f..b78f830 100644
--- a/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py
+++ b/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py
@@ -19,9 +19,9 @@
BUILD_TABLE = 'zuul_build'
-def upgrade():
+def upgrade(table_prefix=''):
op.create_table(
- BUILDSET_TABLE,
+ table_prefix + BUILDSET_TABLE,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('zuul_ref', sa.String(255)),
sa.Column('pipeline', sa.String(255)),
@@ -34,10 +34,10 @@
)
op.create_table(
- BUILD_TABLE,
+ table_prefix + BUILD_TABLE,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('buildset_id', sa.Integer,
- sa.ForeignKey(BUILDSET_TABLE + ".id")),
+ sa.ForeignKey(table_prefix + BUILDSET_TABLE + ".id")),
sa.Column('uuid', sa.String(36)),
sa.Column('job_name', sa.String(255)),
sa.Column('result', sa.String(255)),
diff --git a/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py b/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py
index f9c3535..5502425 100644
--- a/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py
+++ b/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py
@@ -30,8 +30,9 @@
import sqlalchemy as sa
-def upgrade():
- op.add_column('zuul_buildset', sa.Column('ref_url', sa.String(255)))
+def upgrade(table_prefix=''):
+ op.add_column(
+ table_prefix + 'zuul_buildset', sa.Column('ref_url', sa.String(255)))
def downgrade():
diff --git a/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py b/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py
index 985eb0c..67581a6 100644
--- a/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py
+++ b/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py
@@ -18,8 +18,9 @@
BUILDSET_TABLE = 'zuul_buildset'
-def upgrade():
- op.add_column(BUILDSET_TABLE, sa.Column('result', sa.String(255)))
+def upgrade(table_prefix=''):
+ op.add_column(
+ table_prefix + BUILDSET_TABLE, sa.Column('result', sa.String(255)))
connection = op.get_bind()
connection.execute(
@@ -29,9 +30,9 @@
SELECT CASE score
WHEN 1 THEN 'SUCCESS'
ELSE 'FAILURE' END)
- """.format(buildset_table=BUILDSET_TABLE))
+ """.format(buildset_table=table_prefix + BUILDSET_TABLE))
- op.drop_column(BUILDSET_TABLE, 'score')
+ op.drop_column(table_prefix + BUILDSET_TABLE, 'score')
def downgrade():
diff --git a/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py b/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py
index dc75983..3e60866 100644
--- a/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py
+++ b/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py
@@ -16,9 +16,11 @@
import sqlalchemy as sa
-def upgrade():
- op.add_column('zuul_buildset', sa.Column('oldrev', sa.String(255)))
- op.add_column('zuul_buildset', sa.Column('newrev', sa.String(255)))
+def upgrade(table_prefix=''):
+ op.add_column(
+ table_prefix + 'zuul_buildset', sa.Column('oldrev', sa.String(255)))
+ op.add_column(
+ table_prefix + 'zuul_buildset', sa.Column('newrev', sa.String(255)))
def downgrade():
diff --git a/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py b/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py
index 4087af3..84fd0ef 100644
--- a/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py
+++ b/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py
@@ -30,8 +30,9 @@
import sqlalchemy as sa
-def upgrade():
- op.add_column('zuul_buildset', sa.Column('tenant', sa.String(255)))
+def upgrade(table_prefix=''):
+ op.add_column(
+ table_prefix + 'zuul_buildset', sa.Column('tenant', sa.String(255)))
def downgrade():
diff --git a/zuul/driver/sql/sqlconnection.py b/zuul/driver/sql/sqlconnection.py
index b964c0b..285d0c2 100644
--- a/zuul/driver/sql/sqlconnection.py
+++ b/zuul/driver/sql/sqlconnection.py
@@ -15,6 +15,7 @@
import logging
import alembic
+import alembic.command
import alembic.config
import sqlalchemy as sa
import sqlalchemy.pool
@@ -39,6 +40,8 @@
self.engine = None
self.connection = None
self.tables_established = False
+ self.table_prefix = self.connection_config.get('table_prefix', '')
+
try:
self.dburi = self.connection_config.get('dburi')
# Recycle connections if they've been idle for more than 1 second.
@@ -49,7 +52,6 @@
poolclass=sqlalchemy.pool.QueuePool,
pool_recycle=self.connection_config.get('pool_recycle', 1))
self._migrate()
- self._setup_tables()
self.zuul_buildset_table, self.zuul_build_table \
= self._setup_tables()
self.tables_established = True
@@ -75,14 +77,16 @@
config.set_main_option("sqlalchemy.url",
self.connection_config.get('dburi'))
- alembic.command.upgrade(config, 'head')
+ # Alembic lets us add arbitrary data in the tag argument. We can
+ # leverage that to tell the upgrade scripts about the table prefix.
+ tag = {'table_prefix': self.table_prefix}
+ alembic.command.upgrade(config, 'head', tag=tag)
- @staticmethod
- def _setup_tables():
+ def _setup_tables(self):
metadata = sa.MetaData()
zuul_buildset_table = sa.Table(
- BUILDSET_TABLE, metadata,
+ self.table_prefix + BUILDSET_TABLE, metadata,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('zuul_ref', sa.String(255)),
sa.Column('pipeline', sa.String(255)),
@@ -99,10 +103,11 @@
)
zuul_build_table = sa.Table(
- BUILD_TABLE, metadata,
+ self.table_prefix + BUILD_TABLE, metadata,
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('buildset_id', sa.Integer,
- sa.ForeignKey(BUILDSET_TABLE + ".id")),
+ sa.ForeignKey(self.table_prefix +
+ BUILDSET_TABLE + ".id")),
sa.Column('uuid', sa.String(36)),
sa.Column('job_name', sa.String(255)),
sa.Column('result', sa.String(255)),
diff --git a/zuul/executor/client.py b/zuul/executor/client.py
index a8b94f0..06c2087 100644
--- a/zuul/executor/client.py
+++ b/zuul/executor/client.py
@@ -180,8 +180,7 @@
if (hasattr(item.change, 'newrev') and item.change.newrev
and item.change.newrev != '0' * 40):
zuul_params['newrev'] = item.change.newrev
- zuul_params['projects'] = [] # Set below
- zuul_params['_projects'] = {} # transitional to convert to dict
+ zuul_params['projects'] = {} # Set below
zuul_params['items'] = dependent_changes
params = dict()
@@ -253,7 +252,7 @@
params['projects'].append(make_project_dict(project))
projects.add(project)
for p in projects:
- zuul_params['_projects'][p.canonical_name] = (dict(
+ zuul_params['projects'][p.canonical_name] = (dict(
name=p.name,
short_name=p.name.split('/')[-1],
# Duplicate this into the dict too, so that iterating
@@ -265,12 +264,10 @@
))
# We are transitioning "projects" from a list to a dict
# indexed by canonical name, as it is much easier to access
- # values in ansible. Existing callers are converted to
- # "_projects", then once "projects" is unused we switch it,
- # then convert callers back. Finally when "_projects" is
- # unused it will be removed.
- for cn, p in zuul_params['_projects'].items():
- zuul_params['projects'].append(p)
+ # values in ansible. Existing callers have been converted to
+ # "_projects" and "projects" is swapped; we will convert users
+ # back to "projects" and remove this soon.
+ zuul_params['_projects'] = zuul_params['projects']
build = Build(job, uuid)
build.parameters = params
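
With "projects" now a dict keyed by canonical project name, job content can
look an entry up directly instead of scanning a list. An illustrative access
pattern (the canonical name is hypothetical and the per-project entries carry
more fields than the two shown in this hunk):

    zuul_projects = {
        'git.example.com/org/project': {
            'name': 'org/project',
            'short_name': 'project',
            # ... further fields elided ...
        },
    }

    # Old: iterate the list looking for a matching name.
    # New: index by canonical name.
    project = zuul_projects['git.example.com/org/project']
    print(project['short_name'])
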
diff --git a/zuul/executor/server.py b/zuul/executor/server.py
index 016d0e6..7a93f89 100644
--- a/zuul/executor/server.py
+++ b/zuul/executor/server.py
@@ -497,7 +497,8 @@
hosts = {}
for node in nodes:
- hosts[node['name']] = node['host_vars']
+ for name in node['name']:
+ hosts[name] = node['host_vars']
inventory = {
'all': {
@@ -910,7 +911,7 @@
# results in the wrong thing being in interface_ip
# TODO(jeblair): Move this notice to the docs.
ip = node.get('interface_ip')
- port = node.get('ssh_port', 22)
+ port = node.get('connection_port', node.get('ssh_port', 22))
host_vars = dict(
ansible_host=ip,
ansible_user=self.executor_server.default_username,
@@ -958,13 +959,11 @@
"non-trusted repo." % (entry, path))
def findPlaybook(self, path, trusted=False):
- for ext in ['', '.yaml', '.yml']:
- fn = path + ext
- if os.path.exists(fn):
- if not trusted:
- playbook_dir = os.path.dirname(os.path.abspath(fn))
- self._blockPluginDirs(playbook_dir)
- return fn
+ if os.path.exists(path):
+ if not trusted:
+ playbook_dir = os.path.dirname(os.path.abspath(path))
+ self._blockPluginDirs(playbook_dir)
+ return path
raise ExecutorError("Unable to find playbook %s" % path)
def preparePlaybooks(self, args):
@@ -1187,7 +1186,7 @@
callback_path = self.executor_server.callback_dir
with open(jobdir_playbook.ansible_config, 'w') as config:
config.write('[defaults]\n')
- config.write('hostfile = %s\n' % self.jobdir.inventory)
+ config.write('inventory = %s\n' % self.jobdir.inventory)
config.write('local_tmp = %s/local_tmp\n' %
self.jobdir.ansible_cache_root)
config.write('retry_files_enabled = False\n')
@@ -1607,10 +1606,13 @@
self.merger = self._getMerger(self.merge_root)
self.update_queue = DeduplicateQueue()
+ command_socket = get_default(
+ self.config, 'executor', 'command_socket',
+ '/var/lib/zuul/executor.socket')
+ self.command_socket = commandsocket.CommandSocket(command_socket)
+
state_dir = get_default(self.config, 'executor', 'state_dir',
'/var/lib/zuul', expand_user=True)
- path = os.path.join(state_dir, 'executor.socket')
- self.command_socket = commandsocket.CommandSocket(path)
ansible_dir = os.path.join(state_dir, 'ansible')
self.ansible_dir = ansible_dir
if os.path.exists(ansible_dir):
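
Because a node may now carry several names, the executor writes one inventory
host entry per name, all sharing the same host vars. A small sketch of the
resulting structure for a hypothetical two-name node:

    nodes = [{'name': ['controller', 'worker'],
              'host_vars': {'ansible_host': '192.0.2.10',
                            'ansible_port': 22}}]

    hosts = {}
    for node in nodes:
        for name in node['name']:
            hosts[name] = node['host_vars']

    # hosts == {'controller': {...}, 'worker': {...}}; both names resolve
    # to the same connection details in the generated Ansible inventory.
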
diff --git a/zuul/lib/fingergw.py b/zuul/lib/fingergw.py
new file mode 100644
index 0000000..c89ed0f
--- /dev/null
+++ b/zuul/lib/fingergw.py
@@ -0,0 +1,206 @@
+#!/usr/bin/env python
+# Copyright 2017 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import functools
+import logging
+import socket
+import threading
+
+import zuul.rpcclient
+
+from zuul.lib import commandsocket
+from zuul.lib import streamer_utils
+
+
+COMMANDS = ['stop']
+
+
+class RequestHandler(streamer_utils.BaseFingerRequestHandler):
+ '''
+ Class implementing the logic for handling a single finger request.
+ '''
+
+ log = logging.getLogger("zuul.fingergw")
+
+ def __init__(self, *args, **kwargs):
+ self.rpc = kwargs.pop('rpc')
+ super(RequestHandler, self).__init__(*args, **kwargs)
+
+ def _fingerClient(self, server, port, build_uuid):
+ '''
+ Open a finger connection and return all streaming results.
+
+ :param server: The remote server.
+ :param port: The remote port.
+ :param build_uuid: The build UUID to stream.
+
+ Both IPv4 and IPv6 are supported.
+ '''
+ with socket.create_connection((server, port), timeout=10) as s:
+ msg = "%s\n" % build_uuid # Must have a trailing newline!
+ s.sendall(msg.encode('utf-8'))
+ while True:
+ data = s.recv(1024)
+ if data:
+ self.request.sendall(data)
+ else:
+ break
+
+ def handle(self):
+ '''
+ This method is called by the socketserver framework to handle an
+ incoming request.
+ '''
+ try:
+ build_uuid = self.getCommand()
+ port_location = self.rpc.get_job_log_stream_address(build_uuid)
+ self._fingerClient(
+ port_location['server'],
+ port_location['port'],
+ build_uuid,
+ )
+ except Exception:
+ self.log.exception('Finger request handling exception:')
+ msg = 'Internal streaming error'
+ self.request.sendall(msg.encode('utf-8'))
+ return
+
+
+class FingerGateway(object):
+ '''
+ Class implementing the finger multiplexing/gateway logic.
+
+ For each incoming finger request, a new thread is started that will
+ be responsible for finding which Zuul executor is executing the
+ requested build (by asking Gearman), forwarding the request to that
+ executor, and streaming the results back to our client.
+ '''
+
+ log = logging.getLogger("zuul.fingergw")
+
+ def __init__(self, gearman, address, user, command_socket, pid_file):
+ '''
+ Initialize the finger gateway.
+
+ :param tuple gearman: Gearman connection information. This should
+ include the server, port, SSL key, SSL cert, and SSL CA.
+ :param tuple address: The address and port to bind to for our gateway.
+ :param str user: The user to which we should drop privileges after
+ binding to our address.
+ :param str command_socket: Path to the daemon command socket.
+ :param str pid_file: Path to the daemon PID file.
+ '''
+ self.gear_server = gearman[0]
+ self.gear_port = gearman[1]
+ self.gear_ssl_key = gearman[2]
+ self.gear_ssl_cert = gearman[3]
+ self.gear_ssl_ca = gearman[4]
+ self.address = address
+ self.user = user
+ self.pid_file = pid_file
+
+ self.rpc = None
+ self.server = None
+ self.server_thread = None
+
+ self.command_thread = None
+ self.command_running = False
+ self.command_socket = command_socket
+
+ self.command_map = dict(
+ stop=self.stop,
+ )
+
+ def _runCommand(self):
+ while self.command_running:
+ try:
+ command = self.command_socket.get().decode('utf8')
+ if command != '_stop':
+ self.command_map[command]()
+ else:
+ return
+ except Exception:
+ self.log.exception("Exception while processing command")
+
+ def _run(self):
+ try:
+ self.server.serve_forever()
+ except Exception:
+ self.log.exception('Abnormal termination:')
+ raise
+
+ def start(self):
+ self.rpc = zuul.rpcclient.RPCClient(
+ self.gear_server,
+ self.gear_port,
+ self.gear_ssl_key,
+ self.gear_ssl_cert,
+ self.gear_ssl_ca)
+
+ self.server = streamer_utils.CustomThreadingTCPServer(
+ self.address,
+ functools.partial(RequestHandler, rpc=self.rpc),
+ user=self.user,
+ pid_file=self.pid_file)
+
+ # Start the command processor after the server and privilege drop
+ if self.command_socket:
+ self.log.debug("Starting command processor")
+ self.command_socket = commandsocket.CommandSocket(
+ self.command_socket)
+ self.command_socket.start()
+ self.command_running = True
+ self.command_thread = threading.Thread(
+ target=self._runCommand, name='command')
+ self.command_thread.daemon = True
+ self.command_thread.start()
+
+ # The socketserver shutdown() call will hang unless the call
+ # to serve_forever() happens in another thread. So let's do that.
+ self.server_thread = threading.Thread(target=self._run)
+ self.server_thread.daemon = True
+ self.server_thread.start()
+ self.log.info("Finger gateway is started")
+
+ def stop(self):
+ if self.command_socket:
+ self.command_running = False
+ try:
+ self.command_socket.stop()
+ except Exception:
+ self.log.exception("Error stopping command socket:")
+
+ if self.server:
+ try:
+ self.server.shutdown()
+ self.server.server_close()
+ self.server = None
+ except Exception:
+ self.log.exception("Error stopping TCP server:")
+
+ if self.rpc:
+ try:
+ self.rpc.shutdown()
+ self.rpc = None
+ except Exception:
+ self.log.exception("Error stopping RCP client:")
+
+ self.log.info("Finger gateway is stopped")
+
+ def wait(self):
+ '''
+ Wait for the gateway to shut down.
+ '''
+ self.server_thread.join()
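
A hedged sketch of how zuul-fingergw wires this class up (the values below
are placeholders; the real daemon reads them from zuul.conf):

    from zuul.lib.fingergw import FingerGateway

    gateway = FingerGateway(
        gearman=('gearman.example.com', 4730, None, None, None),
        address=('::', 79),        # bind address and port
        user='zuul',               # privileges dropped to this user
        command_socket='/var/lib/zuul/fingergw.socket',
        pid_file=None)             # no pid file when run in the foreground

    gateway.start()
    try:
        gateway.wait()             # block until the server thread exits
    except KeyboardInterrupt:
        gateway.stop()
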
diff --git a/zuul/lib/log_streamer.py b/zuul/lib/log_streamer.py
index 1906be7..5c894b4 100644
--- a/zuul/lib/log_streamer.py
+++ b/zuul/lib/log_streamer.py
@@ -18,14 +18,13 @@
import logging
import os
import os.path
-import pwd
import re
import select
-import socket
-import socketserver
import threading
import time
+from zuul.lib import streamer_utils
+
class Log(object):
@@ -38,7 +37,7 @@
self.size = self.stat.st_size
-class RequestHandler(socketserver.BaseRequestHandler):
+class RequestHandler(streamer_utils.BaseFingerRequestHandler):
'''
Class to handle a single log streaming request.
@@ -46,47 +45,13 @@
the (class/method/attribute) names were changed to protect the innocent.
'''
- MAX_REQUEST_LEN = 1024
- REQUEST_TIMEOUT = 10
-
- # NOTE(Shrews): We only use this to log exceptions since a new process
- # is used per-request (and having multiple processes write to the same
- # log file constantly is bad).
- log = logging.getLogger("zuul.log_streamer.RequestHandler")
-
- def get_command(self):
- poll = select.poll()
- bitmask = (select.POLLIN | select.POLLERR |
- select.POLLHUP | select.POLLNVAL)
- poll.register(self.request, bitmask)
- buffer = b''
- ret = None
- start = time.time()
- while True:
- elapsed = time.time() - start
- timeout = max(self.REQUEST_TIMEOUT - elapsed, 0)
- if not timeout:
- raise Exception("Timeout while waiting for input")
- for fd, event in poll.poll(timeout):
- if event & select.POLLIN:
- buffer += self.request.recv(self.MAX_REQUEST_LEN)
- else:
- raise Exception("Received error event")
- if len(buffer) >= self.MAX_REQUEST_LEN:
- raise Exception("Request too long")
- try:
- ret = buffer.decode('utf-8')
- x = ret.find('\n')
- if x > 0:
- return ret[:x]
- except UnicodeDecodeError:
- pass
+ log = logging.getLogger("zuul.log_streamer")
def handle(self):
try:
- build_uuid = self.get_command()
+ build_uuid = self.getCommand()
except Exception:
- self.log.exception("Failure during get_command:")
+ self.log.exception("Failure during getCommand:")
msg = 'Internal streaming error'
self.request.sendall(msg.encode("utf-8"))
return
@@ -182,59 +147,11 @@
return False
-class CustomThreadingTCPServer(socketserver.ThreadingTCPServer):
- '''
- Custom version that allows us to drop privileges after port binding.
- '''
- address_family = socket.AF_INET6
+class LogStreamerServer(streamer_utils.CustomThreadingTCPServer):
def __init__(self, *args, **kwargs):
- self.user = kwargs.pop('user')
self.jobdir_root = kwargs.pop('jobdir_root')
- # For some reason, setting custom attributes does not work if we
- # call the base class __init__ first. Wha??
- socketserver.ThreadingTCPServer.__init__(self, *args, **kwargs)
-
- def change_privs(self):
- '''
- Drop our privileges to the zuul user.
- '''
- if os.getuid() != 0:
- return
- pw = pwd.getpwnam(self.user)
- os.setgroups([])
- os.setgid(pw.pw_gid)
- os.setuid(pw.pw_uid)
- os.umask(0o022)
-
- def server_bind(self):
- self.allow_reuse_address = True
- socketserver.ThreadingTCPServer.server_bind(self)
- if self.user:
- self.change_privs()
-
- def server_close(self):
- '''
- Overridden from base class to shutdown the socket immediately.
- '''
- try:
- self.socket.shutdown(socket.SHUT_RD)
- self.socket.close()
- except socket.error as e:
- # If it's already closed, don't error.
- if e.errno == socket.EBADF:
- return
- raise
-
- def process_request(self, request, client_address):
- '''
- Overridden from the base class to name the thread.
- '''
- t = threading.Thread(target=self.process_request_thread,
- name='FingerStreamer',
- args=(request, client_address))
- t.daemon = self.daemon_threads
- t.start()
+ super(LogStreamerServer, self).__init__(*args, **kwargs)
class LogStreamer(object):
@@ -243,12 +160,12 @@
'''
def __init__(self, user, host, port, jobdir_root):
- self.log = logging.getLogger('zuul.lib.LogStreamer')
+ self.log = logging.getLogger('zuul.log_streamer')
self.log.debug("LogStreamer starting on port %s", port)
- self.server = CustomThreadingTCPServer((host, port),
- RequestHandler,
- user=user,
- jobdir_root=jobdir_root)
+ self.server = LogStreamerServer((host, port),
+ RequestHandler,
+ user=user,
+ jobdir_root=jobdir_root)
# We start the actual serving within a thread so we can return to
# the owner.
diff --git a/zuul/lib/streamer_utils.py b/zuul/lib/streamer_utils.py
new file mode 100644
index 0000000..985f3c3
--- /dev/null
+++ b/zuul/lib/streamer_utils.py
@@ -0,0 +1,130 @@
+#!/usr/bin/env python
+# Copyright 2017 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+'''
+This file contains code common to finger log streaming functionality.
+The log streamer process within each executor, the finger gateway service,
+and the web interface will all make use of this module.
+'''
+
+import os
+import pwd
+import select
+import socket
+import socketserver
+import threading
+import time
+
+
+class BaseFingerRequestHandler(socketserver.BaseRequestHandler):
+ '''
+ Base class for common methods for handling finger requests.
+ '''
+
+ MAX_REQUEST_LEN = 1024
+ REQUEST_TIMEOUT = 10
+
+ def getCommand(self):
+ poll = select.poll()
+ bitmask = (select.POLLIN | select.POLLERR |
+ select.POLLHUP | select.POLLNVAL)
+ poll.register(self.request, bitmask)
+ buffer = b''
+ ret = None
+ start = time.time()
+ while True:
+ elapsed = time.time() - start
+ timeout = max(self.REQUEST_TIMEOUT - elapsed, 0)
+ if not timeout:
+ raise Exception("Timeout while waiting for input")
+ for fd, event in poll.poll(timeout):
+ if event & select.POLLIN:
+ buffer += self.request.recv(self.MAX_REQUEST_LEN)
+ else:
+ raise Exception("Received error event")
+ if len(buffer) >= self.MAX_REQUEST_LEN:
+ raise Exception("Request too long")
+ try:
+ ret = buffer.decode('utf-8')
+ x = ret.find('\n')
+ if x > 0:
+ return ret[:x]
+ except UnicodeDecodeError:
+ pass
+
+
+class CustomThreadingTCPServer(socketserver.ThreadingTCPServer):
+ '''
+ Custom version that allows us to drop privileges after port binding.
+ '''
+
+ address_family = socket.AF_INET6
+
+ def __init__(self, *args, **kwargs):
+ self.user = kwargs.pop('user')
+ self.pid_file = kwargs.pop('pid_file', None)
+ socketserver.ThreadingTCPServer.__init__(self, *args, **kwargs)
+
+ def change_privs(self):
+ '''
+ Drop our privileges to another user.
+ '''
+ if os.getuid() != 0:
+ return
+
+ pw = pwd.getpwnam(self.user)
+
+ # Change owner on our pid file so it can be removed by us after
+ # dropping privileges. May not exist if not a daemon.
+ if self.pid_file and os.path.exists(self.pid_file):
+ os.chown(self.pid_file, pw.pw_uid, pw.pw_gid)
+
+ os.setgroups([])
+ os.setgid(pw.pw_gid)
+ os.setuid(pw.pw_uid)
+ os.umask(0o022)
+
+ def server_bind(self):
+ '''
+ Overridden from the base class to allow address reuse and to drop
+ privileges after binding to the listening socket.
+ '''
+ self.allow_reuse_address = True
+ socketserver.ThreadingTCPServer.server_bind(self)
+ if self.user:
+ self.change_privs()
+
+ def server_close(self):
+ '''
+ Overridden from the base class to shut down the socket immediately.
+ '''
+ try:
+ self.socket.shutdown(socket.SHUT_RD)
+ self.socket.close()
+ except socket.error as e:
+ # If it's already closed, don't error.
+ if e.errno == socket.EBADF:
+ return
+ raise
+
+ def process_request(self, request, client_address):
+ '''
+ Overridden from the base class to name the thread.
+ '''
+ t = threading.Thread(target=self.process_request_thread,
+ name='socketserver_Thread',
+ args=(request, client_address))
+ t.daemon = self.daemon_threads
+ t.start()
diff --git a/zuul/merger/server.py b/zuul/merger/server.py
index 765d9e0..576d41e 100644
--- a/zuul/merger/server.py
+++ b/zuul/merger/server.py
@@ -19,10 +19,14 @@
import gear
+from zuul.lib import commandsocket
from zuul.lib.config import get_default
from zuul.merger import merger
+COMMANDS = ['stop']
+
+
class MergeServer(object):
log = logging.getLogger("zuul.MergeServer")
@@ -40,9 +44,16 @@
self.merger = merger.Merger(
merge_root, connections, merge_email, merge_name, speed_limit,
speed_time)
+ self.command_map = dict(
+ stop=self.stop)
+ command_socket = get_default(
+ self.config, 'merger', 'command_socket',
+ '/var/lib/zuul/merger.socket')
+ self.command_socket = commandsocket.CommandSocket(command_socket)
def start(self):
self._running = True
+ self._command_running = True
server = self.config.get('gearman', 'server')
port = get_default(self.config, 'gearman', 'port', 4730)
ssl_key = get_default(self.config, 'gearman', 'ssl_key')
@@ -54,6 +65,13 @@
self.worker.waitForServer()
self.log.debug("Registering")
self.register()
+ self.log.debug("Starting command processor")
+ self.command_socket.start()
+ self.command_thread = threading.Thread(
+ target=self.runCommand, name='command')
+ self.command_thread.daemon = True
+ self.command_thread.start()
+
self.log.debug("Starting worker")
self.thread = threading.Thread(target=self.run)
self.thread.daemon = True
@@ -67,12 +85,23 @@
def stop(self):
self.log.debug("Stopping")
self._running = False
+ self._command_running = False
+ self.command_socket.stop()
self.worker.shutdown()
self.log.debug("Stopped")
def join(self):
self.thread.join()
+ def runCommand(self):
+ while self._command_running:
+ try:
+ command = self.command_socket.get().decode('utf8')
+ if command != '_stop':
+ self.command_map[command]()
+ except Exception:
+ self.log.exception("Exception while processing command")
+
def run(self):
self.log.debug("Starting merge listener")
while self._running:
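
The command thread just reads short command names off a local Unix socket
and dispatches them through command_map; "_stop" is reserved for shutting
the reader itself down. A minimal sketch of issuing the "stop" command by
hand, assuming the CommandSocket protocol is simply the command name written
to the socket (the daemon command-line wrappers normally do this for you):

    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect('/var/lib/zuul/merger.socket')
        s.sendall(b'stop\n')
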
diff --git a/zuul/model.py b/zuul/model.py
index c1e1914..e53a357 100644
--- a/zuul/model.py
+++ b/zuul/model.py
@@ -383,7 +383,7 @@
self.public_ipv4 = None
self.private_ipv4 = None
self.public_ipv6 = None
- self.ssh_port = 22
+ self.connection_port = 22
self._keys = []
self.az = None
self.provider = None
@@ -498,9 +498,10 @@
return n
def addNode(self, node):
- if node.name in self.nodes:
- raise Exception("Duplicate node in %s" % (self,))
- self.nodes[node.name] = node
+ for name in node.name:
+ if name in self.nodes:
+ raise Exception("Duplicate node in %s" % (self,))
+ self.nodes[tuple(node.name)] = node
def getNodes(self):
return list(self.nodes.values())
@@ -858,6 +859,7 @@
source_line=None,
inheritance_path=(),
parent_data=None,
+ description=None,
)
self.inheritable_attributes = {}
@@ -1195,8 +1197,8 @@
if soft:
current_parent_jobs = set()
else:
- raise Exception("Dependent job %s not found: " %
- (dependent_job,))
+ raise Exception("Job %s depends on %s which was not run." %
+ (dependent_job, current_job))
new_parent_jobs = current_parent_jobs - all_parent_jobs
jobs_to_iterate |= new_parent_jobs
all_parent_jobs |= new_parent_jobs
@@ -1862,7 +1864,7 @@
result = build.result
finger_url = build.url
# TODO(tobiash): add support for custom web root
- urlformat = 'static/stream.html?' \
+ urlformat = 'stream.html?' \
'uuid={build.uuid}&' \
'logfile=console.log'
if websocket_url:
@@ -2384,14 +2386,25 @@
r.semaphores = copy.deepcopy(self.semaphores)
return r
- def extend(self, conf):
+ def extend(self, conf, tenant=None):
if isinstance(conf, UnparsedTenantConfig):
self.pragmas.extend(conf.pragmas)
self.pipelines.extend(conf.pipelines)
self.jobs.extend(conf.jobs)
self.project_templates.extend(conf.project_templates)
for k, v in conf.projects.items():
- self.projects.setdefault(k, []).extend(v)
+ name = k
+ # If we have the tenant, register the projects under the
+ # project's canonical name rather than the name given in the
+ # configuration. If the project is not found, it is fine to fall
+ # back to the given name; we also don't need to raise
+ # ProjectNotFoundException here, since semantic validation
+ # happens later and will fail there.
+ if tenant is not None:
+ trusted, project = tenant.getProject(k)
+ if project is not None:
+ name = project.canonical_name
+ self.projects.setdefault(name, []).extend(v)
self.nodesets.extend(conf.nodesets)
self.secrets.extend(conf.secrets)
self.semaphores.extend(conf.semaphores)
@@ -2430,6 +2443,8 @@
class Layout(object):
"""Holds all of the Pipelines."""
+ log = logging.getLogger("zuul.layout")
+
def __init__(self, tenant):
self.uuid = uuid4().hex
self.tenant = tenant
@@ -2540,7 +2555,11 @@
matched = False
for variant in self.getJobs(jobname):
if not variant.changeMatches(change):
+ self.log.debug("Variant %s did not match %s", repr(variant),
+ change)
continue
+ else:
+ self.log.debug("Variant %s matched %s", repr(variant), change)
if not variant.isBase():
parent = variant.parent
if not jobs and parent is None:
@@ -2563,9 +2582,12 @@
for jobname in job_list.jobs:
# This is the final job we are constructing
frozen_job = None
+ self.log.debug("Collecting jobs %s for %s", jobname, change)
try:
variants = self.collectJobs(jobname, change)
except NoMatchingParentError:
+ self.log.debug("No matching parents for job %s and change %s",
+ jobname, change)
variants = None
if not variants:
# A change must match at least one defined job variant
@@ -2587,6 +2609,11 @@
if variant.changeMatches(change):
frozen_job.applyVariant(variant)
matched = True
+ self.log.debug("Pipeline variant %s matched %s",
+ repr(variant), change)
+ else:
+ self.log.debug("Pipeline variant %s did not match %s",
+ repr(variant), change)
if not matched:
# A change must match at least one project pipeline
# job variant.
diff --git a/zuul/rpclistener.py b/zuul/rpclistener.py
index 8c8c783..e5016df 100644
--- a/zuul/rpclistener.py
+++ b/zuul/rpclistener.py
@@ -21,6 +21,7 @@
import gear
from zuul import model
+from zuul.lib import encryption
from zuul.lib.config import get_default
@@ -58,6 +59,8 @@
self.worker.registerFunction("zuul:get_job_log_stream_address")
self.worker.registerFunction("zuul:tenant_list")
self.worker.registerFunction("zuul:status_get")
+ self.worker.registerFunction("zuul:job_list")
+ self.worker.registerFunction("zuul:key_get")
def getFunctions(self):
functions = {}
@@ -283,3 +286,24 @@
args = json.loads(job.arguments)
output = self.sched.formatStatusJSON(args.get("tenant"))
job.sendWorkComplete(output)
+
+ def handle_job_list(self, job):
+ args = json.loads(job.arguments)
+ tenant = self.sched.abide.tenants.get(args.get("tenant"))
+ output = []
+ for job_name in sorted(tenant.layout.jobs):
+ desc = None
+ for tenant_job in tenant.layout.jobs[job_name]:
+ if tenant_job.description:
+ desc = tenant_job.description.split('\n')[0]
+ break
+ output.append({"name": job_name,
+ "description": desc})
+ job.sendWorkComplete(json.dumps(output))
+
+ def handle_key_get(self, job):
+ args = json.loads(job.arguments)
+ tenant = self.sched.abide.tenants.get(args.get("tenant"))
+ (trusted, project) = tenant.getProject(args.get("project"))
+ job.sendWorkComplete(
+ encryption.serialize_rsa_public_key(project.public_key))
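
Both new RPC functions are thin wrappers that zuul-web calls over Gearman;
they can also be exercised directly with the existing RPC client. A hedged
sketch (the Gearman coordinates, tenant, and project names are placeholders):

    import json

    import zuul.rpcclient

    client = zuul.rpcclient.RPCClient('gearman.example.com', 4730)

    job = client.submitJob('zuul:job_list', {'tenant': 'example-tenant'})
    for entry in json.loads(job.data[0]):
        print(entry['name'], '-', entry['description'])

    key = client.submitJob('zuul:key_get', {'tenant': 'example-tenant',
                                            'project': 'org/project'})
    print(key.data[0])  # PEM-encoded RSA public key for the project
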
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index 7dee00d..b978979 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -30,10 +30,13 @@
from zuul import exceptions
from zuul import version as zuul_version
from zuul import rpclistener
+from zuul.lib import commandsocket
from zuul.lib.config import get_default
from zuul.lib.statsd import get_statsd
import zuul.lib.queue
+COMMANDS = ['stop']
+
class ManagementEvent(object):
"""An event that should be processed within the main queue run loop"""
@@ -215,6 +218,9 @@
self.wake_event = threading.Event()
self.layout_lock = threading.Lock()
self.run_handler_lock = threading.Lock()
+ self.command_map = dict(
+ stop=self.stop,
+ )
self._pause = False
self._exit = False
self._stopped = False
@@ -243,6 +249,11 @@
time_dir = self._get_time_database_dir()
self.time_database = model.TimeDataBase(time_dir)
+ command_socket = get_default(
+ self.config, 'scheduler', 'command_socket',
+ '/var/lib/zuul/scheduler.socket')
+ self.command_socket = commandsocket.CommandSocket(command_socket)
+
self.zuul_version = zuul_version.version_info.release_string()
self.last_reconfigured = None
self.tenant_last_reconfigured = {}
@@ -250,6 +261,14 @@
def start(self):
super(Scheduler, self).start()
+ self._command_running = True
+ self.log.debug("Starting command processor")
+ self.command_socket.start()
+ self.command_thread = threading.Thread(target=self.runCommand,
+ name='command')
+ self.command_thread.daemon = True
+ self.command_thread.start()
+
self.rpc.start()
self.stats_thread.start()
@@ -261,6 +280,17 @@
self.stats_thread.join()
self.rpc.stop()
self.rpc.join()
+ self._command_running = False
+ self.command_socket.stop()
+
+ def runCommand(self):
+ while self._command_running:
+ try:
+ command = self.command_socket.get().decode('utf8')
+ if command != '_stop':
+ self.command_map[command]()
+ except Exception:
+ self.log.exception("Exception while processing command")
def registerConnections(self, connections, webapp, load=True):
# load: whether or not to trigger the onLoad for the connection. This
diff --git a/zuul/web/__init__.py b/zuul/web/__init__.py
index 766a21d..cefc922 100755
--- a/zuul/web/__init__.py
+++ b/zuul/web/__init__.py
@@ -20,11 +20,14 @@
import logging
import os
import time
+import urllib.parse
import uvloop
import aiohttp
from aiohttp import web
+from sqlalchemy.sql import select
+
import zuul.rpcclient
STATIC_DIR = os.path.join(os.path.dirname(__file__), 'static')
@@ -39,17 +42,6 @@
def setEventLoop(self, event_loop):
self.event_loop = event_loop
- def _getPortLocation(self, job_uuid):
- """
- Query Gearman for the executor running the given job.
-
- :param str job_uuid: The job UUID we want to stream.
- """
- # TODO: Fetch the entire list of uuid/file/server/ports once and
- # share that, and fetch a new list on cache misses perhaps?
- ret = self.rpc.get_job_log_stream_address(job_uuid)
- return ret
-
async def _fingerClient(self, ws, server, port, job_uuid):
"""
Create a client to connect to the finger streamer and pull results.
@@ -91,7 +83,10 @@
# Schedule the blocking gearman work in an Executor
gear_task = self.event_loop.run_in_executor(
- None, self._getPortLocation, request['uuid'])
+ None,
+ self.rpc.get_job_log_stream_address,
+ request['uuid'],
+ )
try:
port_location = await asyncio.wait_for(gear_task, 10)
@@ -162,6 +157,8 @@
self.controllers = {
'tenant_list': self.tenant_list,
'status_get': self.status_get,
+ 'job_list': self.job_list,
+ 'key_get': self.key_get,
}
def tenant_list(self, request):
@@ -182,6 +179,20 @@
resp.last_modified = self.cache_time[tenant]
return resp
+ def job_list(self, request):
+ tenant = request.match_info["tenant"]
+ job = self.rpc.submitJob('zuul:job_list', {'tenant': tenant})
+ resp = web.json_response(json.loads(job.data[0]))
+ resp.headers['Access-Control-Allow-Origin'] = '*'
+ return resp
+
+ def key_get(self, request):
+ tenant = request.match_info["tenant"]
+ project = request.match_info["project"]
+ job = self.rpc.submitJob('zuul:key_get', {'tenant': tenant,
+ 'project': project})
+ return web.Response(body=job.data[0])
+
async def processRequest(self, request, action):
try:
resp = self.controllers[action](request)
@@ -194,6 +205,93 @@
return resp
+class SqlHandler(object):
+ log = logging.getLogger("zuul.web.SqlHandler")
+ filters = ("project", "pipeline", "change", "patchset", "ref",
+ "result", "uuid", "job_name", "voting", "node_name", "newrev")
+
+ def __init__(self, connection):
+ self.connection = connection
+
+ def query(self, args):
+ build = self.connection.zuul_build_table
+ buildset = self.connection.zuul_buildset_table
+ query = select([
+ buildset.c.project,
+ buildset.c.pipeline,
+ buildset.c.change,
+ buildset.c.patchset,
+ buildset.c.ref,
+ buildset.c.newrev,
+ buildset.c.ref_url,
+ build.c.result,
+ build.c.uuid,
+ build.c.job_name,
+ build.c.voting,
+ build.c.node_name,
+ build.c.start_time,
+ build.c.end_time,
+ build.c.log_url]).select_from(build.join(buildset))
+ for table in ('build', 'buildset'):
+ for k, v in args['%s_filters' % table].items():
+ if table == 'build':
+ column = build.c
+ else:
+ column = buildset.c
+ query = query.where(getattr(column, k).in_(v))
+ return query.limit(args['limit']).offset(args['skip']).order_by(
+ build.c.id.desc())
+
+ def get_builds(self, args):
+ """Return a list of build"""
+ builds = []
+ with self.connection.engine.begin() as conn:
+ query = self.query(args)
+ for row in conn.execute(query):
+ build = dict(row)
+ # Convert date to iso format
+ if row.start_time:
+ build['start_time'] = row.start_time.strftime(
+ '%Y-%m-%dT%H:%M:%S')
+ if row.end_time:
+ build['end_time'] = row.end_time.strftime(
+ '%Y-%m-%dT%H:%M:%S')
+ # Compute run duration
+ if row.start_time and row.end_time:
+ build['duration'] = (row.end_time -
+ row.start_time).total_seconds()
+ builds.append(build)
+ return builds
+
+ async def processRequest(self, request):
+ try:
+ args = {
+ 'buildset_filters': {},
+ 'build_filters': {},
+ 'limit': 50,
+ 'skip': 0,
+ }
+ for k, v in urllib.parse.parse_qsl(request.rel_url.query_string):
+ if k in ("tenant", "project", "pipeline", "change",
+ "patchset", "ref", "newrev"):
+ args['buildset_filters'].setdefault(k, []).append(v)
+ elif k in ("uuid", "job_name", "voting", "node_name",
+ "result"):
+ args['build_filters'].setdefault(k, []).append(v)
+ elif k in ("limit", "skip"):
+ args[k] = int(v)
+ else:
+ raise ValueError("Unknown parameter %s" % k)
+ data = self.get_builds(args)
+ resp = web.json_response(data)
+ resp.headers['Access-Control-Allow-Origin'] = '*'
+ except Exception as e:
+ self.log.exception("Jobs exception:")
+ resp = web.json_response({'error_description': 'Internal error'},
+ status=500)
+ return resp
+
+
class ZuulWeb(object):
log = logging.getLogger("zuul.web.ZuulWeb")
@@ -201,7 +299,8 @@
def __init__(self, listen_address, listen_port,
gear_server, gear_port,
ssl_key=None, ssl_cert=None, ssl_ca=None,
- static_cache_expiry=3600):
+ static_cache_expiry=3600,
+ sql_connection=None):
self.listen_address = listen_address
self.listen_port = listen_port
self.event_loop = None
@@ -212,6 +311,10 @@
ssl_key, ssl_cert, ssl_ca)
self.log_streaming_handler = LogStreamingHandler(self.rpc)
self.gearman_handler = GearmanHandler(self.rpc)
+ if sql_connection:
+ self.sql_handler = SqlHandler(sql_connection)
+ else:
+ self.sql_handler = None
async def _handleWebsocket(self, request):
return await self.log_streaming_handler.processRequest(
@@ -224,12 +327,27 @@
async def _handleStatusRequest(self, request):
return await self.gearman_handler.processRequest(request, 'status_get')
+ async def _handleJobsRequest(self, request):
+ return await self.gearman_handler.processRequest(request, 'job_list')
+
+ async def _handleSqlRequest(self, request):
+ return await self.sql_handler.processRequest(request)
+
+ async def _handleKeyRequest(self, request):
+ return await self.gearman_handler.processRequest(request, 'key_get')
+
async def _handleStaticRequest(self, request):
fp = None
if request.path.endswith("tenants.html") or request.path.endswith("/"):
fp = os.path.join(STATIC_DIR, "index.html")
elif request.path.endswith("status.html"):
fp = os.path.join(STATIC_DIR, "status.html")
+ elif request.path.endswith("jobs.html"):
+ fp = os.path.join(STATIC_DIR, "jobs.html")
+ elif request.path.endswith("builds.html"):
+ fp = os.path.join(STATIC_DIR, "builds.html")
+ elif request.path.endswith("stream.html"):
+ fp = os.path.join(STATIC_DIR, "stream.html")
headers = {}
if self.static_cache_expiry:
headers['Cache-Control'] = "public, max-age=%d" % \
@@ -248,14 +366,24 @@
is run within a separate (non-main) thread.
"""
routes = [
- ('GET', '/console-stream', self._handleWebsocket),
('GET', '/tenants.json', self._handleTenantsRequest),
('GET', '/{tenant}/status.json', self._handleStatusRequest),
+ ('GET', '/{tenant}/jobs.json', self._handleJobsRequest),
+ ('GET', '/{tenant}/console-stream', self._handleWebsocket),
+ ('GET', '/{tenant}/{project:.*}.pub', self._handleKeyRequest),
('GET', '/{tenant}/status.html', self._handleStaticRequest),
+ ('GET', '/{tenant}/jobs.html', self._handleStaticRequest),
+ ('GET', '/{tenant}/stream.html', self._handleStaticRequest),
('GET', '/tenants.html', self._handleStaticRequest),
('GET', '/', self._handleStaticRequest),
]
+ if self.sql_handler:
+ routes.append(('GET', '/{tenant}/builds.json',
+ self._handleSqlRequest))
+ routes.append(('GET', '/{tenant}/builds.html',
+ self._handleStaticRequest))
+
self.log.debug("ZuulWeb starting")
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
user_supplied_loop = loop is not None
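
The builds endpoint accepts the filters parsed above as ordinary query
parameters; buildset-level and build-level filters can be mixed, and "limit"
and "skip" page through results. A hedged usage sketch (host and tenant are
placeholders):

    import json
    import urllib.request

    url = ('http://zuul.example.com/example-tenant/builds.json'
           '?project=org/project&result=SUCCESS&limit=10&skip=0')

    with urllib.request.urlopen(url) as resp:
        builds = json.loads(resp.read().decode('utf-8'))

    for build in builds:
        print(build['job_name'], build['result'], build.get('duration'))
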
diff --git a/zuul/web/static/README b/zuul/web/static/README
index f17ea5f..e924dc7 100644
--- a/zuul/web/static/README
+++ b/zuul/web/static/README
@@ -50,8 +50,7 @@
</Directory>
# Console-stream needs a special proxy-pass for websocket
- ProxyPass /console-stream ws://localhost:9000/console-stream nocanon retry=0
- ProxyPassReverse /console-stream ws://localhost:9000/console-stream
+ ProxyPassMatch /(.*)/console-stream ws://localhost:9000/$1/console-stream nocanon retry=0
# Then only the json calls are sent to the zuul-web endpoints
ProxyPassMatch ^/(.*.json)$ http://localhost:9000/$1 nocanon retry=0
diff --git a/zuul/web/static/builds.html b/zuul/web/static/builds.html
new file mode 100644
index 0000000..ace1e0a
--- /dev/null
+++ b/zuul/web/static/builds.html
@@ -0,0 +1,76 @@
+<!--
+Copyright 2017 Red Hat
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may
+not use this file except in compliance with the License. You may obtain
+a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE html>
+<html>
+<head>
+ <title>Zuul Builds</title>
+ <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
+ <link rel="stylesheet" href="../static/styles/zuul.css" />
+ <script src="../static/js/jquery.min.js"></script>
+ <script src="../static/js/angular.min.js"></script>
+ <script src="../static/javascripts/zuul.angular.js"></script>
+</head>
+<body ng-app="zuulBuilds" ng-controller="mainController"><div class="container-fluid">
+ <nav class="navbar navbar-default">
+ <div class="container-fluid">
+ <div class="navbar-header">
+ <a class="navbar-brand" href="../" target="_self">Zuul Dashboard</a>
+ </div>
+ <ul class="nav navbar-nav">
+ <li><a href="status.html" target="_self">Status</a></li>
+ <li><a href="jobs.html" target="_self">Jobs</a></li>
+ <li class="active"><a href="builds.html" target="_self">Builds</a></li>
+ </ul>
+ <span style="float: right; margin-top: 7px;">
+ <form ng-submit="builds_fetch()">
+ <label>Pipeline:</label>
+ <input name="pipeline" ng-model="pipeline" />
+ <label>Job:</label>
+ <input name="job_name" ng-model="job_name" />
+ <label>Project:</label>
+ <input name="project" ng-model="project" />
+ <input type="submit" value="Refresh" />
+ </form>
+ </span>
+ </div>
+ </nav>
+ <table class="table table-hover table-condensed">
+ <thead>
+ <tr>
+ <th>Job</th>
+ <th>Project</th>
+ <th>Pipeline</th>
+ <th>Change</th>
+ <th>Duration</th>
+ <th>Log url</th>
+ <th>Start time</th>
+ <th>Result</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr ng-repeat="build in builds" ng-class="rowClass(build)">
+ <td>{{ build.job_name }}</td>
+ <td>{{ build.project }}</td>
+ <td>{{ build.pipeline }}</td>
+ <td><a href="{{ build.ref_url }}" target="_self">change</a></td>
+ <td>{{ build.duration }} seconds</td>
+ <td><a ng-if="build.log_url" href="{{ build.log_url }}" target="_self">logs</a></td>
+ <td>{{ build.start_time }}</td>
+ <td>{{ build.result }}</td>
+ </tr>
+ </tbody>
+ </table>
+</div></body></html>
diff --git a/zuul/web/static/index.html b/zuul/web/static/index.html
index 6747e66..d20a1ea 100644
--- a/zuul/web/static/index.html
+++ b/zuul/web/static/index.html
@@ -17,10 +17,10 @@
<html>
<head>
<title>Zuul Tenants</title>
- <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+ <link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">
<link rel="stylesheet" href="static/styles/zuul.css" />
- <script src="/static/js/jquery.min.js"></script>
- <script src="/static/js/angular.min.js"></script>
+ <script src="static/js/jquery.min.js"></script>
+ <script src="static/js/angular.min.js"></script>
<script src="static/javascripts/zuul.angular.js"></script>
</head>
<body ng-app="zuulTenants" ng-controller="mainController"><div class="container-fluid">
diff --git a/zuul/web/static/javascripts/zuul.angular.js b/zuul/web/static/javascripts/zuul.angular.js
index 3152fc0..87cbbdd 100644
--- a/zuul/web/static/javascripts/zuul.angular.js
+++ b/zuul/web/static/javascripts/zuul.angular.js
@@ -30,3 +30,70 @@
}
$scope.tenants_fetch();
});
+
+angular.module('zuulJobs', []).controller(
+ 'mainController', function($scope, $http)
+{
+ $scope.jobs = undefined;
+ $scope.jobs_fetch = function() {
+ $http.get("jobs.json")
+ .then(function success(result) {
+ $scope.jobs = result.data;
+ });
+ }
+ $scope.jobs_fetch();
+});
+
+angular.module('zuulBuilds', [], function($locationProvider) {
+ $locationProvider.html5Mode({
+ enabled: true,
+ requireBase: false
+ });
+}).controller('mainController', function($scope, $http, $location)
+{
+ $scope.rowClass = function(build) {
+ if (build.result == "SUCCESS") {
+ return "success";
+ } else {
+ return "warning";
+ }
+ };
+ var query_args = $location.search();
+ var url = $location.url();
+ var tenant_start = url.lastIndexOf(
+ '/', url.lastIndexOf('/builds.html') - 1) + 1;
+ var tenant_length = url.lastIndexOf('/builds.html') - tenant_start;
+ $scope.tenant = url.substr(tenant_start, tenant_length);
+ $scope.builds = undefined;
+ if (query_args["pipeline"]) {$scope.pipeline = query_args["pipeline"];
+ } else {$scope.pipeline = "";}
+ if (query_args["job_name"]) {$scope.job_name = query_args["job_name"];
+ } else {$scope.job_name = "";}
+ if (query_args["project"]) {$scope.project = query_args["project"];
+ } else {$scope.project = "";}
+ $scope.builds_fetch = function() {
+ var query_string = "";
+ if ($scope.tenant) {query_string += "&tenant="+$scope.tenant;}
+ if ($scope.pipeline) {query_string += "&pipeline="+$scope.pipeline;}
+ if ($scope.job_name) {query_string += "&job_name="+$scope.job_name;}
+ if ($scope.project) {query_string += "&project="+$scope.project;}
+ if (query_string != "") {query_string = "?" + query_string.substr(1);}
+ $http.get("builds.json" + query_string)
+ .then(function success(result) {
+ for (var build_pos = 0;
+ build_pos < result.data.length;
+ build_pos += 1) {
+ var build = result.data[build_pos];
+ if (build.node_name == null) {
+ build.node_name = 'master'
+ }
+ /* Fix incorrect url for post_failure job */
+ if (build.log_url == build.job_name) {
+ build.log_url = undefined;
+ }
+ }
+ $scope.builds = result.data;
+ });
+ }
+ $scope.builds_fetch()
+});
diff --git a/zuul/web/static/javascripts/zuul.app.js b/zuul/web/static/javascripts/zuul.app.js
index 7ceb2dd..bf90a4d 100644
--- a/zuul/web/static/javascripts/zuul.app.js
+++ b/zuul/web/static/javascripts/zuul.app.js
@@ -28,8 +28,6 @@
function zuul_build_dom($, container) {
// Build a default-looking DOM
var default_layout = '<div class="container">'
- + '<h1>Zuul Status</h1>'
- + '<p>Real-time status monitor of Zuul, the pipeline manager between Gerrit and Workers.</p>'
+ '<div class="zuul-container" id="zuul-container">'
+ '<div style="display: none;" class="alert" id="zuul_msg"></div>'
+ '<button class="btn pull-right zuul-spinner">updating <span class="glyphicon glyphicon-refresh"></span></button>'
diff --git a/zuul/web/static/jobs.html b/zuul/web/static/jobs.html
new file mode 100644
index 0000000..b27d882
--- /dev/null
+++ b/zuul/web/static/jobs.html
@@ -0,0 +1,55 @@
+<!--
+Copyright 2017 Red Hat
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may
+not use this file except in compliance with the License. You may obtain
+a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE html>
+<html>
+<head>
+ <title>Zuul Jobs</title>
+ <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
+ <link rel="stylesheet" href="../static/styles/zuul.css" />
+ <script src="../static/js/jquery.min.js"></script>
+ <script src="../static/js/angular.min.js"></script>
+ <script src="../static/javascripts/zuul.angular.js"></script>
+</head>
+<body ng-app="zuulJobs" ng-controller="mainController"><div class="container-fluid">
+ <nav class="navbar navbar-default">
+ <div class="container-fluid">
+ <div class="navbar-header">
+ <a class="navbar-brand" href="../" target="_self">Zuul Dashboard</a>
+ </div>
+ <ul class="nav navbar-nav">
+ <li><a href="status.html" target="_self">Status</a></li>
+ <li class="active"><a href="jobs.html" target="_self">Jobs</a></li>
+ <li><a href="builds.html" target="_self">Builds</a></li>
+ </ul>
+ </div>
+ </nav>
+ <table class="table table-hover table-condensed">
+ <thead>
+ <tr>
+ <th>Name</th>
+ <th>Description</th>
+ <th>Last builds</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr ng-repeat="job in jobs">
+ <td>{{ job.name }}</td>
+ <td>{{ job.description }}</td>
+ <td><a href="builds.html?job_name={{ job.name }}">builds</a></td>
+ </tr>
+ </tbody>
+ </table>
+</div></body></html>
diff --git a/zuul/web/static/status.html b/zuul/web/static/status.html
index 7cb9536..8471fd1 100644
--- a/zuul/web/static/status.html
+++ b/zuul/web/static/status.html
@@ -19,11 +19,11 @@
<html>
<head>
<title>Zuul Status</title>
- <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+ <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
<link rel="stylesheet" href="../static/styles/zuul.css" />
- <script src="/static/js/jquery.min.js"></script>
- <script src="/static/js/jquery-visibility.min.js"></script>
- <script src="/static/js/jquery.graphite.min.js"></script>
+ <script src="../static/js/jquery.min.js"></script>
+ <script src="../static/js/jquery-visibility.min.js"></script>
+ <script src="../static/js/jquery.graphite.min.js"></script>
<script src="../static/javascripts/jquery.zuul.js"></script>
<script src="../static/javascripts/zuul.app.js"></script>
</head>
diff --git a/zuul/web/static/stream.html b/zuul/web/static/stream.html
index dbeb66b..f2e7081 100644
--- a/zuul/web/static/stream.html
+++ b/zuul/web/static/stream.html
@@ -73,7 +73,7 @@
} else {
protocol = 'ws://';
}
- path = url['pathname'].replace(/static\/.*$/g, '') + 'console-stream';
+ path = url['pathname'].replace(/stream.html.*$/g, '') + 'console-stream';
params['websocket_url'] = protocol + url['host'] + path;
}
var ws = new WebSocket(params['websocket_url']);