Merge branch 'master' into feature/zuulv3

Change-Id: I37a3c5d4f12917b111b7eb624f8b68689687ebc4
diff --git a/.gitignore b/.gitignore
index e76a1bd..d6a7477 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,7 +10,6 @@
 AUTHORS
 build/*
 ChangeLog
-config
 doc/build/*
 zuul/versioninfo
 dist/
diff --git a/.gitreview b/.gitreview
index 665adb6..9ba1bdc 100644
--- a/.gitreview
+++ b/.gitreview
@@ -2,3 +2,4 @@
 host=review.openstack.org
 port=29418
 project=openstack-infra/zuul.git
+defaultbranch=feature/zuulv3
diff --git a/.testr.conf b/.testr.conf
index 8ef6689..7e8d028 100644
--- a/.testr.conf
+++ b/.testr.conf
@@ -1,4 +1,4 @@
 [DEFAULT]
-test_command=OS_LOG_LEVEL=${OS_LOG_LEVEL:-INFO} OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} OS_LOG_DEFAULTS=${OS_LOG_DEFAULTS:-""} ${PYTHON:-python} -m subunit.run discover -t ./ tests $LISTOPT $IDOPTION
+test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} OS_LOG_DEFAULTS=${OS_LOG_DEFAULTS:-""} ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./tests/unit} $LISTOPT $IDOPTION
 test_id_option=--load-list $IDFILE
 test_list_option=--list
diff --git a/.zuul.yaml b/.zuul.yaml
new file mode 100644
index 0000000..bb9a96d
--- /dev/null
+++ b/.zuul.yaml
@@ -0,0 +1,15 @@
+- job:
+    name: python-linters
+    pre-run: pre
+    post-run: post
+    success-url: http://zuulv3-dev.openstack.org/logs/{build.uuid}/
+    failure-url: http://zuulv3-dev.openstack.org/logs/{build.uuid}/
+    nodes:
+      - name: worker
+        image: ubuntu-xenial
+
+- project:
+    name: openstack-infra/zuul
+    check:
+      jobs:
+        - python-linters
diff --git a/README.rst b/README.rst
index 90e00a5..697d994 100644
--- a/README.rst
+++ b/README.rst
@@ -3,6 +3,10 @@
 
 Zuul is a project gating system developed for the OpenStack Project.
 
+We are currently engaged in a significant development effort in
+preparation for the third major version of Zuul.  We call this effort
+`Zuul v3`_ and it is described in more detail below.
+
 Contributing
 ------------
 
@@ -25,3 +29,116 @@
     # Do your commits
     $ git review
     # Enter your username if prompted
+
+Zuul v3
+-------
+
+The Zuul v3 effort involves significant changes to Zuul, and its
+companion program, Nodepool.  The intent is for Zuul to become more
+generally useful outside of the OpenStack community.  This is the best
+way to get started with this effort:
+
+1) Read the Zuul v3 spec: http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html
+
+   We use specification documents like this to describe large efforts
+   where we want to make sure that all the participants are in
+   agreement about what will happen and generally how before starting
+   development.  These specs should contain enough information for
+   people to evaluate the proposal generally, and sometimes include
+   specific details that need to be agreed upon in advance.  They are
+   living documents which can change as work gets underway.  However,
+   every change or detail does not need to be reflected in the spec --
+   most work is simply done with patches (and revised if necessary in
+   code review).
+
+2) Read the Nodepool build-workers spec: http://specs.openstack.org/openstack-infra/infra-specs/specs/nodepool-zookeeper-workers.html
+
+3) Review any proposed updates to these specs: https://review.openstack.org/#/q/status:open+project:openstack-infra/infra-specs+topic:zuulv3
+
+   Some of the information in the specs may be effectively superceded
+   by changes here, which are still undergoing review.
+
+4) Read documentation on the internal data model and testing: http://docs.openstack.org/infra/zuul/feature/zuulv3/internals.html
+
+   The general philosophy for Zuul tests is to perform functional
+   testing of either the individual component or the entire end-to-end
+   system with external systems (such as Gerrit) replaced with fakes.
+   Before adding additional unit tests with a narrower focus, consider
+   whether they add value to this system or are merely duplicative of
+   functional tests.
+
+5) Review open changes: https://review.openstack.org/#/q/status:open+branch:feature/zuulv3
+
+   We find that the most valuable code reviews are ones that spot
+   problems with the proposed change, or raise questions about how
+   that might affect other systems or subsequent work.  It is also a
+   great way to stay involved as a team in work performed by others
+   (for instance, by observing and asking questions about development
+   while it is in progress).  We try not to sweat the small things and
+   don't worry too much about style suggestions or other nitpicky
+   things (unless they are relevant -- for instance, a -1 vote on a
+   change that introduces a yaml change out of character with existing
+   conventions is useful because it makes the system more
+   user-friendly; a -1 vote on a change which uses a sub-optimal line
+   breaking strategy is probably not the best use of anyone's time).
+
+6) Join #zuul on Freenode.  Let others (especially jeblair who is
+   trying to coordinate and prioritize work) know what you would like
+   to work on.
+
+7) Check storyboard for status of current work items: https://storyboard.openstack.org/#!/board/41
+
+Once you are up to speed on those items, it will be helpful to know
+the following:
+
+* Zuul v3 includes some substantial changes to Zuul, and in order to
+  implement them quickly and simultaneously, we temporarily disabled
+  most of the test suite.  That test suite still has relevance, but
+  tests are likely to need updating individually, with reasons ranging
+  from something simple such as a test-framework method changing its
+  name, to more substantial issues, such as a feature being removed as
+  part of the v3 work.  Each test will need to be evaluated
+  individually.  Feel free to, at any time, claim a test name in this
+  story and work on re-enabling it:
+  https://storyboard.openstack.org/#!/story/2000773
+
+* Because of the importance of external systems, as well as the number
+  of internal Zuul components, actually running Zuul in a development
+  mode quickly becomes unweildy (imagine uploading changes to Gerrit
+  repeatedly while altering Zuul source code).  Instead, the best way
+  to develop with Zuul is in fact to write a functional test.
+  Construct a test to fully simulate the series of events you want to
+  see, then run it in the foreground.  For example::
+
+    .tox/py27/bin/python -m testtools.run tests.test_scheduler.TestScheduler.test_jobs_launched
+
+  See TESTING.rst for more information.
+
+* There are many occasions, when working on sweeping changes to Zuul
+  v3, we left notes for future work items in the code marked with
+  "TODOv3".  These represent potentially serious missing functionality
+  or other issues which must be resolved before an initial v3 release
+  (unlike a more conventional TODO note, these really can not be left
+  indefinitely).  These present an opportunity to identify work items
+  not otherwise tracked.  The names associated with TODO or TODOv3
+  items do not mean that only that person can address them -- they
+  simply reflect who to ask to explain the item in more detail if it
+  is too cryptic.  In your own work, feel free to leave TODOv3 notes
+  if a change would otherwise become too large or unweildy.
+
+Roadmap
+-------
+
+* Implement Zookeeper for Nodepool builders and begin using this in
+  OpenStack Infra
+* Implement Zookeeper for Nodepool launchers
+* Implement a shim to translate Zuul v2 demand into Nodepool Zookeeper
+  launcher requests
+* Begin using Zookeeper based Nodepool launchers with Zuul v2.5 in
+  OpenStack Infra
+* Begin using Zuul v3 to run jobs for Zuul itself
+* Move OpenStack Infra to use Zuul v3
+* Implement Github support
+* Begin using Zuul v3 to run tests on Ansible repos
+* Implement support in Nodepool for non-OpenStack clouds
+* Add native container support to Zuul / Nodepool
diff --git a/TESTING.rst b/TESTING.rst
index 56f2fbb..d2cd4c1 100644
--- a/TESTING.rst
+++ b/TESTING.rst
@@ -17,6 +17,16 @@
 
   pip install tox
 
+As of zuul v3, a running zookeeper is required to execute tests.
+
+*Install zookeeper*::
+
+  [apt-get | yum] install zookeeperd
+
+*Start zookeeper*::
+
+  service zookeeper start
+
 Run The Tests
 -------------
 
@@ -54,12 +64,12 @@
 
 For example, to *run the basic Zuul test*::
 
-  tox -e py27 -- tests.test_scheduler.TestScheduler.test_jobs_launched
+  tox -e py27 -- tests.unit.test_scheduler.TestScheduler.test_jobs_launched
 
 To *run one test in the foreground* (after previously having run tox
 to set up the virtualenv)::
 
-  .tox/py27/bin/python -m testtools.run tests.test_scheduler.TestScheduler.test_jobs_launched
+  .tox/py27/bin/python -m testtools.run tests.unit.test_scheduler.TestScheduler.test_jobs_launched
 
 List Failing Tests
 ------------------
diff --git a/bindep.txt b/bindep.txt
index 32c750a..8d8c45b 100644
--- a/bindep.txt
+++ b/bindep.txt
@@ -4,3 +4,4 @@
 mysql-client [test]
 mysql-server [test]
 libjpeg-dev [test]
+zookeeperd [platform:dpkg]
diff --git a/doc/source/client.rst b/doc/source/client.rst
index 5fe2252..6b62360 100644
--- a/doc/source/client.rst
+++ b/doc/source/client.rst
@@ -28,7 +28,7 @@
 
 Example::
 
-  zuul enqueue --trigger gerrit --pipeline check --project example_project --change 12345,1
+  zuul enqueue --tenant openstack --trigger gerrit --pipeline check --project example_project --change 12345,1
 
 Note that the format of change id is <number>,<patchset>.
 
@@ -38,7 +38,7 @@
 
 Example::
 
-  zuul promote --pipeline check --changes 12345,1 13336,3
+  zuul promote --tenant openstack --pipeline check --changes 12345,1 13336,3
 
 Note that the format of changes id is <number>,<patchset>.
 
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 9e0d2c7..f8ae368 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -25,7 +25,11 @@
 
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'sphinxcontrib.blockdiag', 'sphinxcontrib.programoutput' ]
+extensions = [
+    'sphinx.ext.autodoc',
+    'sphinxcontrib.blockdiag',
+    'sphinxcontrib.programoutput'
+]
 #extensions = ['sphinx.ext.intersphinx']
 #intersphinx_mapping = {'python': ('http://docs.python.org/2.7', None)}
 
diff --git a/doc/source/datamodel.rst b/doc/source/datamodel.rst
new file mode 100644
index 0000000..9df6505
--- /dev/null
+++ b/doc/source/datamodel.rst
@@ -0,0 +1,78 @@
+Data Model
+==========
+
+It all starts with the :py:class:`~zuul.model.Pipeline`. A Pipeline is the
+basic organizational structure that everything else hangs off.
+
+.. autoclass:: zuul.model.Pipeline
+
+Pipelines have a configured
+:py:class:`~zuul.manager.PipelineManager` which controlls how
+the :py:class:`Change <zuul.model.Changeish>` objects are enqueued and
+processed.
+
+There are currently two,
+:py:class:`~zuul.manager.dependent.DependentPipelineManager` and
+:py:class:`~zuul.manager.independent.IndependentPipelineManager`
+
+.. autoclass:: zuul.manager.PipelineManager
+.. autoclass:: zuul.manager.dependent.DependentPipelineManager
+.. autoclass:: zuul.manager.independent.IndependentPipelineManager
+
+A :py:class:`~zuul.model.Pipeline` has one or more
+:py:class:`~zuul.model.ChangeQueue` objects.
+
+.. autoclass:: zuul.model.ChangeQueue
+
+A :py:class:`~zuul.model.Job` represents the definition of what to do. A
+:py:class:`~zuul.model.Build` represents a single run of a
+:py:class:`~zuul.model.Job`. A :py:class:`~zuul.model.JobTree` is used to
+encapsulate the dependencies between one or more :py:class:`~zuul.model.Job`
+objects.
+
+.. autoclass:: zuul.model.Job
+.. autoclass:: zuul.model.JobTree
+.. autoclass:: zuul.model.Build
+
+The :py:class:`~zuul.manager.base.PipelineManager` enqueues each
+:py:class:`Change <zuul.model.Changeish>` into the
+:py:class:`~zuul.model.ChangeQueue` in a :py:class:`~zuul.model.QueueItem`.
+
+.. autoclass:: zuul.model.QueueItem
+
+As the Changes are processed, each :py:class:`~zuul.model.Build` is put into
+a :py:class:`~zuul.model.BuildSet`
+
+.. autoclass:: zuul.model.BuildSet
+
+Changes
+~~~~~~~
+
+.. autoclass:: zuul.model.Changeish
+.. autoclass:: zuul.model.Change
+.. autoclass:: zuul.model.Ref
+
+Filters
+~~~~~~~
+
+.. autoclass:: zuul.model.ChangeishFilter
+.. autoclass:: zuul.model.EventFilter
+
+
+Tenants
+~~~~~~~
+
+An abide is a collection of tenants.
+
+.. autoclass:: zuul.model.Tenant
+.. autoclass:: zuul.model.UnparsedAbideConfig
+.. autoclass:: zuul.model.UnparsedTenantConfig
+
+Other Global Objects
+~~~~~~~~~~~~~~~~~~~~
+
+.. autoclass:: zuul.model.Project
+.. autoclass:: zuul.model.Layout
+.. autoclass:: zuul.model.RepoFiles
+.. autoclass:: zuul.model.Worker
+.. autoclass:: zuul.model.TriggerEvent
diff --git a/doc/source/developer.rst b/doc/source/developer.rst
new file mode 100644
index 0000000..527ea6e
--- /dev/null
+++ b/doc/source/developer.rst
@@ -0,0 +1,15 @@
+Developer's Guide
+=================
+
+This section contains information for Developers who wish to work on
+Zuul itself.  This information is not necessary for the operation of
+Zuul, though advanced users may find it interesting.
+
+.. autoclass:: zuul.scheduler.Scheduler
+
+.. toctree::
+   :maxdepth: 1
+
+   datamodel
+   drivers
+   testing
diff --git a/doc/source/drivers.rst b/doc/source/drivers.rst
new file mode 100644
index 0000000..6588381
--- /dev/null
+++ b/doc/source/drivers.rst
@@ -0,0 +1,20 @@
+Drivers
+=======
+
+Zuul provides an API for extending its functionality to interact with
+other systems.
+
+.. autoclass:: zuul.driver.Driver
+   :members:
+
+.. autoclass:: zuul.driver.ConnectionInterface
+   :members:
+
+.. autoclass:: zuul.driver.SourceInterface
+   :members:
+
+.. autoclass:: zuul.driver.TriggerInterface
+   :members:
+
+.. autoclass:: zuul.driver.ReporterInterface
+   :members:
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 3c793da..784fc4d 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -24,6 +24,7 @@
    launchers
    statsd
    client
+   developer
 
 Indices and tables
 ==================
diff --git a/doc/source/launchers.rst b/doc/source/launchers.rst
index f368cb9..8a8c932 100644
--- a/doc/source/launchers.rst
+++ b/doc/source/launchers.rst
@@ -273,9 +273,7 @@
   build:FUNCTION_NAME:NODE_NAME
 
 where **NODE_NAME** is the name or class of node on which the job
-should be run.  This can be specified by setting the ZUUL_NODE
-parameter in a parameter-function (see :ref:`includes` section in
-:ref:`zuulconf`).
+should be run.
 
 Zuul sends the ZUUL_* parameters described in `Zuul Parameters`_
 encoded in JSON format as the argument included with the
@@ -362,24 +360,3 @@
 
 The original job is expected to complete with a WORK_DATA and
 WORK_FAIL packet as described in `Starting Builds`_.
-
-Build Descriptions
-^^^^^^^^^^^^^^^^^^
-
-In order to update the job running system with a description of the
-current state of all related builds, the job runner may optionally
-implement the following Gearman function:
-
-  set_description:MANAGER_NAME
-
-Where **MANAGER_NAME** is used as described in `Stopping Builds`_.
-The argument to the function is the following encoded in JSON format:
-
-**name**
-  The job name of the build to describe.
-
-**number**
-  The build number of the build to describe.
-
-**html_description**
-  The description of the build in HTML format.
diff --git a/doc/source/testing.rst b/doc/source/testing.rst
new file mode 100644
index 0000000..092754f
--- /dev/null
+++ b/doc/source/testing.rst
@@ -0,0 +1,29 @@
+Testing
+=======
+
+Zuul provides an extensive framework for performing functional testing
+on the system from end-to-end with major external components replaced
+by fakes for ease of use and speed.
+
+Test classes that subclass :py:class:`~tests.base.ZuulTestCase` have
+access to a number of attributes useful for manipulating or inspecting
+the environment being simulated in the test:
+
+.. autoclass:: tests.base.ZuulTestCase
+   :members:
+
+.. autoclass:: tests.base.FakeGerritConnection
+   :members:
+   :inherited-members:
+
+.. autoclass:: tests.base.FakeGearmanServer
+   :members:
+
+.. autoclass:: tests.base.RecordingLaunchServer
+   :members:
+
+.. autoclass:: tests.base.FakeBuild
+   :members:
+
+.. autoclass:: tests.base.BuildHistory
+   :members:
diff --git a/doc/source/zuul.rst b/doc/source/zuul.rst
index e8279d9..8906dac 100644
--- a/doc/source/zuul.rst
+++ b/doc/source/zuul.rst
@@ -52,11 +52,6 @@
   Port on which the Gearman server is listening.
   ``port=4730`` (optional)
 
-**check_job_registration**
-  Check to see if job is registered with Gearman or not. When True
-  a build result of NOT_REGISTERED will be return if job is not found.
-  ``check_job_registration=True``
-
 gearman_server
 """"""""""""""
 
@@ -266,26 +261,6 @@
 Zuul should perform.  There are three sections: pipelines, jobs, and
 projects.
 
-.. _includes:
-
-Includes
-""""""""
-
-Custom functions to be used in Zuul's configuration may be provided
-using the ``includes`` directive.  It accepts a list of files to
-include, and currently supports one type of inclusion, a python file::
-
-  includes:
-    - python-file: local_functions.py
-
-**python-file**
-  The path to a python file (either an absolute path or relative to the
-  directory name of :ref:`layout_config <layout_config>`).  The
-  file will be loaded and objects that it defines will be placed in a
-  special environment which can be referenced in the Zuul configuration.
-  Currently only the parameter-function attribute of a Job uses this
-  feature.
-
 Pipelines
 """""""""
 
@@ -810,33 +785,6 @@
 
 **tags (optional)**
   A list of arbitrary strings which will be associated with the job.
-  Can be used by the parameter-function to alter behavior based on
-  their presence on a job.  If the job name is a regular expression,
-  tags will accumulate on jobs that match.
-
-**parameter-function (optional)**
-  Specifies a function that should be applied to the parameters before
-  the job is launched.  The function should be defined in a python file
-  included with the :ref:`includes` directive.  The function
-  should have the following signature:
-
-  .. function:: parameters(item, job, parameters)
-
-     Manipulate the parameters passed to a job before a build is
-     launched.  The ``parameters`` dictionary will already contain the
-     standard Zuul job parameters, and is expected to be modified
-     in-place.
-
-     :param item: the current queue item
-     :type item: zuul.model.QueueItem
-     :param job: the job about to be run
-     :type job: zuul.model.Job
-     :param parameters: parameters to be passed to the job
-     :type parameters: dict
-
-  If the parameter **ZUUL_NODE** is set by this function, then it will
-  be used to specify on what node (or class of node) the job should be
-  run.
 
 **swift**
   If :ref:`swift` is configured then each job can define a destination
diff --git a/etc/zuul.conf-sample b/etc/zuul.conf-sample
index 9998a70..7207c73 100644
--- a/etc/zuul.conf-sample
+++ b/etc/zuul.conf-sample
@@ -10,6 +10,7 @@
 pidfile=/var/run/zuul/zuul.pid
 state_dir=/var/lib/zuul
 status_url=https://jenkins.example.com/zuul/status
+zookeeper_hosts=127.0.0.1:2181
 
 [merger]
 git_dir=/var/lib/zuul/git
diff --git a/playbooks/post.yaml b/playbooks/post.yaml
new file mode 100644
index 0000000..a11e50a
--- /dev/null
+++ b/playbooks/post.yaml
@@ -0,0 +1,19 @@
+- hosts: all
+  tasks:
+    - name: Collect console log.
+      synchronize:
+        dest: "{{ zuul.launcher.log_root }}"
+        mode: pull
+        src: "/tmp/console.log"
+
+    - name: Collect tox logs.
+      synchronize:
+        dest: "{{ zuul.launcher.log_root }}/tox"
+        mode: pull
+        src: "/home/zuul/workspace/src/{{ zuul.project }}/.tox/pep8/log/"
+
+    - name: publish tox logs.
+      copy:
+        dest: "/opt/zuul-logs/{{ zuul.uuid}}"
+        src: "{{ zuul.launcher.log_root }}/"
+      delegate_to: 127.0.0.1
diff --git a/playbooks/pre.yaml b/playbooks/pre.yaml
new file mode 100644
index 0000000..1a2e699
--- /dev/null
+++ b/playbooks/pre.yaml
@@ -0,0 +1,3 @@
+- hosts: all
+  roles:
+    - prepare-workspace
diff --git a/playbooks/python-linters.yaml b/playbooks/python-linters.yaml
new file mode 100644
index 0000000..bc7effe
--- /dev/null
+++ b/playbooks/python-linters.yaml
@@ -0,0 +1,7 @@
+- hosts: all
+  tasks:
+    - name: Run a tox -e pep8.
+      include_role:
+        name: run-tox
+      vars:
+        run_tox_eventlist: pep8
diff --git a/playbooks/roles/prepare-workspace/defaults/main.yaml b/playbooks/roles/prepare-workspace/defaults/main.yaml
new file mode 100644
index 0000000..9127ad8
--- /dev/null
+++ b/playbooks/roles/prepare-workspace/defaults/main.yaml
@@ -0,0 +1,3 @@
+---
+# tasks/main.yaml
+prepare_workspace_root: /home/zuul/workspace
diff --git a/playbooks/roles/prepare-workspace/tasks/main.yaml b/playbooks/roles/prepare-workspace/tasks/main.yaml
new file mode 100644
index 0000000..76f9d95
--- /dev/null
+++ b/playbooks/roles/prepare-workspace/tasks/main.yaml
@@ -0,0 +1,21 @@
+- name: Ensure console.log does not exist.
+  file:
+    path: /tmp/console.log
+    state: absent
+
+- name: Start zuul_console daemon.
+  zuul_console:
+    path: /tmp/console.log
+    port: 19885
+
+- name: Create workspace directory.
+  file:
+    path: "{{ prepare_workspace_root }}"
+    owner: zuul
+    group: zuul
+    state: directory
+
+- name: Synchronize src repos to workspace directory.
+  synchronize:
+    dest: "{{ prepare_workspace_root }}"
+    src: "{{ zuul.launcher.src_root }}"
diff --git a/playbooks/roles/run-tox/defaults/main.yaml b/playbooks/roles/run-tox/defaults/main.yaml
new file mode 100644
index 0000000..7f0310c
--- /dev/null
+++ b/playbooks/roles/run-tox/defaults/main.yaml
@@ -0,0 +1,3 @@
+---
+# tasks/main.yaml
+run_tox_eventlist:
diff --git a/playbooks/roles/run-tox/tasks/main.yaml b/playbooks/roles/run-tox/tasks/main.yaml
new file mode 100644
index 0000000..ca8d079
--- /dev/null
+++ b/playbooks/roles/run-tox/tasks/main.yaml
@@ -0,0 +1,4 @@
+- name: Run tox
+  shell: "/usr/local/jenkins/slave_scripts/run-tox.sh {{ run_tox_eventlist }}"
+  args:
+    chdir: "/home/zuul/workspace/src/{{ zuul.project }}"
diff --git a/requirements.txt b/requirements.txt
index 872b8f0..84d84be 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -15,5 +15,7 @@
 PrettyTable>=0.6,<0.8
 babel>=1.0
 six>=1.6.0
+ansible>=2.0.0.1
+kazoo
 sqlalchemy
 alembic
diff --git a/setup.cfg b/setup.cfg
index 4967cd0..972f261 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -21,7 +21,7 @@
 
 [entry_points]
 console_scripts =
-    zuul-server = zuul.cmd.server:main
+    zuul-scheduler = zuul.cmd.scheduler:main
     zuul-merger = zuul.cmd.merger:main
     zuul = zuul.cmd.client:main
     zuul-cloner = zuul.cmd.cloner:main
diff --git a/test-requirements.txt b/test-requirements.txt
index 39ecf62..e43b7a1 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,4 +1,4 @@
-hacking>=0.9.2,<0.10
+hacking>=0.12.0,!=0.13.0,<0.14  # Apache-2.0
 
 coverage>=3.6
 sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
diff --git a/tests/base.py b/tests/base.py
index 9dc412b..7073305 100755
--- a/tests/base.py
+++ b/tests/base.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python
 
 # Copyright 2012 Hewlett-Packard Development Company, L.P.
+# Copyright 2016 Red Hat, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -20,7 +21,6 @@
 import json
 import logging
 import os
-import pprint
 from six.moves import queue as Queue
 from six.moves import urllib
 import random
@@ -28,10 +28,13 @@
 import select
 import shutil
 from six.moves import reload_module
+from six import StringIO
 import socket
 import string
 import subprocess
 import swiftclient
+import sys
+import tempfile
 import threading
 import time
 import uuid
@@ -40,37 +43,35 @@
 import git
 import gear
 import fixtures
+import kazoo.client
+import kazoo.exceptions
 import pymysql
 import statsd
 import testtools
-from git import GitCommandError
+import testtools.content
+import testtools.content_type
+from git.exc import NoSuchPathError
 
-import zuul.connection.gerrit
-import zuul.connection.smtp
+import zuul.driver.gerrit.gerritsource as gerritsource
+import zuul.driver.gerrit.gerritconnection as gerritconnection
 import zuul.connection.sql
 import zuul.scheduler
 import zuul.webapp
 import zuul.rpclistener
-import zuul.launcher.gearman
+import zuul.launcher.server
+import zuul.launcher.client
 import zuul.lib.swift
+import zuul.lib.connections
 import zuul.merger.client
 import zuul.merger.merger
 import zuul.merger.server
-import zuul.reporter.gerrit
-import zuul.reporter.smtp
-import zuul.source.gerrit
-import zuul.trigger.gerrit
-import zuul.trigger.timer
-import zuul.trigger.zuultrigger
+import zuul.nodepool
+import zuul.zk
 
 FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
                            'fixtures')
 USE_TEMPDIR = True
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
-
 
 def repack_repo(path):
     cmd = ['git', '--git-dir=%s/.git' % path, 'repack', '-afd']
@@ -103,12 +104,12 @@
 
 
 class FakeChange(object):
-    categories = {'APRV': ('Approved', -1, 1),
-                  'CRVW': ('Code-Review', -2, 2),
-                  'VRFY': ('Verified', -2, 2)}
+    categories = {'approved': ('Approved', -1, 1),
+                  'code-review': ('Code-Review', -2, 2),
+                  'verified': ('Verified', -2, 2)}
 
     def __init__(self, gerrit, number, project, branch, subject,
-                 status='NEW', upstream_root=None):
+                 status='NEW', upstream_root=None, files={}):
         self.gerrit = gerrit
         self.reported = 0
         self.queried = 0
@@ -142,11 +143,11 @@
             'url': 'https://hostname/%s' % number}
 
         self.upstream_root = upstream_root
-        self.addPatchset()
+        self.addPatchset(files=files)
         self.data['submitRecords'] = self.getSubmitRecords()
         self.open = status == 'NEW'
 
-    def add_fake_change_to_repo(self, msg, fn, large):
+    def addFakeChangeToRepo(self, msg, files, large):
         path = os.path.join(self.upstream_root, self.project)
         repo = git.Repo(path)
         ref = ChangeReference.create(repo, '1/%s/%s' % (self.number,
@@ -158,12 +159,11 @@
 
         path = os.path.join(self.upstream_root, self.project)
         if not large:
-            fn = os.path.join(path, fn)
-            f = open(fn, 'w')
-            f.write("test %s %s %s\n" %
-                    (self.branch, self.number, self.latest_patchset))
-            f.close()
-            repo.index.add([fn])
+            for fn, content in files.items():
+                fn = os.path.join(path, fn)
+                with open(fn, 'w') as f:
+                    f.write(content)
+                repo.index.add([fn])
         else:
             for fni in range(100):
                 fn = os.path.join(path, str(fni))
@@ -180,19 +180,20 @@
         repo.heads['master'].checkout()
         return r
 
-    def addPatchset(self, files=[], large=False):
+    def addPatchset(self, files=None, large=False):
         self.latest_patchset += 1
-        if files:
-            fn = files[0]
-        else:
+        if not files:
             fn = '%s-%s' % (self.branch.replace('/', '_'), self.number)
+            data = ("test %s %s %s\n" %
+                    (self.branch, self.number, self.latest_patchset))
+            files = {fn: data}
         msg = self.subject + '-' + str(self.latest_patchset)
-        c = self.add_fake_change_to_repo(msg, fn, large)
+        c = self.addFakeChangeToRepo(msg, files, large)
         ps_files = [{'file': '/COMMIT_MSG',
                      'type': 'ADDED'},
                     {'file': 'README',
                      'type': 'MODIFIED'}]
-        for f in files:
+        for f in files.keys():
             ps_files.append({'file': f, 'type': 'ADDED'})
         d = {'approvals': [],
              'createdOn': time.time(),
@@ -260,12 +261,22 @@
                             "url": "https://hostname/3"},
                  "patchSet": self.patchsets[patchset - 1],
                  "author": {"name": "User Name"},
-                 "approvals": [{"type": "Code-Review",
+                 "approvals": [{"type": "code-review",
                                 "description": "Code-Review",
                                 "value": "0"}],
                  "comment": "This is a comment"}
         return event
 
+    def getChangeMergedEvent(self):
+        event = {"submitter": {"name": "Jenkins",
+                               "username": "jenkins"},
+                 "newRev": "29ed3b5f8f750a225c5be70235230e3a6ccb04d9",
+                 "patchSet": self.patchsets[-1],
+                 "change": self.data,
+                 "type": "change-merged",
+                 "eventCreatedOn": 1487613810}
+        return event
+
     def getRefUpdatedEvent(self):
         path = os.path.join(self.upstream_root, self.project)
         repo = git.Repo(path)
@@ -404,26 +415,35 @@
         self.reported += 1
 
 
-class FakeGerritConnection(zuul.connection.gerrit.GerritConnection):
+class FakeGerritConnection(gerritconnection.GerritConnection):
+    """A Fake Gerrit connection for use in tests.
+
+    This subclasses
+    :py:class:`~zuul.connection.gerrit.GerritConnection` to add the
+    ability for tests to add changes to the fake Gerrit it represents.
+    """
+
     log = logging.getLogger("zuul.test.FakeGerritConnection")
 
-    def __init__(self, connection_name, connection_config,
-                 changes_db=None, queues_db=None, upstream_root=None):
-        super(FakeGerritConnection, self).__init__(connection_name,
+    def __init__(self, driver, connection_name, connection_config,
+                 changes_db=None, upstream_root=None):
+        super(FakeGerritConnection, self).__init__(driver, connection_name,
                                                    connection_config)
 
-        self.event_queue = queues_db
+        self.event_queue = Queue.Queue()
         self.fixture_dir = os.path.join(FIXTURE_DIR, 'gerrit')
         self.change_number = 0
         self.changes = changes_db
         self.queries = []
         self.upstream_root = upstream_root
 
-    def addFakeChange(self, project, branch, subject, status='NEW'):
+    def addFakeChange(self, project, branch, subject, status='NEW',
+                      files=None):
+        """Add a change to the fake Gerrit."""
         self.change_number += 1
         c = FakeChange(self, self.change_number, project, branch, subject,
                        upstream_root=self.upstream_root,
-                       status=status)
+                       status=status, files=files)
         self.changes[self.change_number] = c
         return c
 
@@ -441,10 +461,11 @@
         # happens they can add their own verified event into the queue.
         # Nevertheless, we can update change with the new review in gerrit.
 
-        for cat in ['CRVW', 'VRFY', 'APRV']:
-            if cat in action:
+        for cat in action.keys():
+            if cat != 'submit':
                 change.addApproval(cat, action[cat], username=self.user)
 
+        # TODOv3(jeblair): can this be removed?
         if 'label' in action:
             parts = action['label'].split('=')
             change.addApproval(parts[0], parts[2], username=self.user)
@@ -492,8 +513,8 @@
         self.__dict__.update(kw)
 
     def __repr__(self):
-        return ("<Completed build, result: %s name: %s #%s changes: %s>" %
-                (self.result, self.name, self.number, self.changes))
+        return ("<Completed build, result: %s name: %s uuid: %s changes: %s>" %
+                (self.result, self.name, self.uuid, self.changes))
 
 
 class FakeURLOpener(object):
@@ -547,28 +568,46 @@
         os.write(self.wake_write, '1\n')
 
 
-class FakeBuild(threading.Thread):
+class FakeBuild(object):
     log = logging.getLogger("zuul.test")
 
-    def __init__(self, worker, job, number, node):
-        threading.Thread.__init__(self)
+    def __init__(self, launch_server, job):
         self.daemon = True
-        self.worker = worker
+        self.launch_server = launch_server
         self.job = job
-        self.name = job.name.split(':')[1]
-        self.number = number
-        self.node = node
+        self.jobdir = None
+        self.uuid = job.unique
         self.parameters = json.loads(job.arguments)
+        # TODOv3(jeblair): self.node is really "the image of the node
+        # assigned".  We should rename it (self.node_image?) if we
+        # keep using it like this, or we may end up exposing more of
+        # the complexity around multi-node jobs here
+        # (self.nodes[0].image?)
+        self.node = None
+        if len(self.parameters.get('nodes')) == 1:
+            self.node = self.parameters['nodes'][0]['image']
         self.unique = self.parameters['ZUUL_UUID']
+        self.pipeline = self.parameters['ZUUL_PIPELINE']
+        self.project = self.parameters['ZUUL_PROJECT']
+        self.name = self.parameters['job']
         self.wait_condition = threading.Condition()
         self.waiting = False
         self.aborted = False
         self.requeue = False
         self.created = time.time()
-        self.description = ''
-        self.run_error = False
+        self.changes = None
+        if 'ZUUL_CHANGE_IDS' in self.parameters:
+            self.changes = self.parameters['ZUUL_CHANGE_IDS']
+
+    def __repr__(self):
+        waiting = ''
+        if self.waiting:
+            waiting = ' [waiting]'
+        return '<FakeBuild %s:%s %s%s>' % (self.pipeline, self.name,
+                                           self.changes, waiting)
 
     def release(self):
+        """Release this build."""
         self.wait_condition.acquire()
         self.wait_condition.notify()
         self.waiting = False
@@ -576,6 +615,12 @@
         self.wait_condition.release()
 
     def isWaiting(self):
+        """Return whether this build is being held.
+
+        :returns: Whether the build is being held.
+        :rtype: bool
+        """
+
         self.wait_condition.acquire()
         if self.waiting:
             ret = True
@@ -592,185 +637,190 @@
         self.wait_condition.release()
 
     def run(self):
-        data = {
-            'url': 'https://server/job/%s/%s/' % (self.name, self.number),
-            'name': self.name,
-            'number': self.number,
-            'manager': self.worker.worker_id,
-            'worker_name': 'My Worker',
-            'worker_hostname': 'localhost',
-            'worker_ips': ['127.0.0.1', '192.168.1.1'],
-            'worker_fqdn': 'zuul.example.org',
-            'worker_program': 'FakeBuilder',
-            'worker_version': 'v1.1',
-            'worker_extra': {'something': 'else'}
-        }
-
         self.log.debug('Running build %s' % self.unique)
 
-        self.job.sendWorkData(json.dumps(data))
-        self.log.debug('Sent WorkData packet with %s' % json.dumps(data))
-        self.job.sendWorkStatus(0, 100)
-
-        if self.worker.hold_jobs_in_build:
+        if self.launch_server.hold_jobs_in_build:
             self.log.debug('Holding build %s' % self.unique)
             self._wait()
         self.log.debug("Build %s continuing" % self.unique)
 
-        self.worker.lock.acquire()
-
-        result = 'SUCCESS'
-        if (('ZUUL_REF' in self.parameters) and
-            self.worker.shouldFailTest(self.name,
-                                       self.parameters['ZUUL_REF'])):
-            result = 'FAILURE'
+        result = (RecordingAnsibleJob.RESULT_NORMAL, 0)  # Success
+        if (('ZUUL_REF' in self.parameters) and self.shouldFail()):
+            result = (RecordingAnsibleJob.RESULT_NORMAL, 1)  # Failure
         if self.aborted:
-            result = 'ABORTED'
+            result = (RecordingAnsibleJob.RESULT_ABORTED, None)
         if self.requeue:
-            result = None
+            result = (RecordingAnsibleJob.RESULT_UNREACHABLE, None)
 
-        if self.run_error:
-            work_fail = True
-            result = 'RUN_ERROR'
-        else:
-            data['result'] = result
-            data['node_labels'] = ['bare-necessities']
-            data['node_name'] = 'foo'
-            work_fail = False
+        return result
 
-        changes = None
-        if 'ZUUL_CHANGE_IDS' in self.parameters:
-            changes = self.parameters['ZUUL_CHANGE_IDS']
+    def shouldFail(self):
+        changes = self.launch_server.fail_tests.get(self.name, [])
+        for change in changes:
+            if self.hasChanges(change):
+                return True
+        return False
 
-        self.worker.build_history.append(
-            BuildHistory(name=self.name, number=self.number,
-                         result=result, changes=changes, node=self.node,
-                         uuid=self.unique, description=self.description,
-                         parameters=self.parameters,
-                         pipeline=self.parameters['ZUUL_PIPELINE'])
-        )
+    def hasChanges(self, *changes):
+        """Return whether this build has certain changes in its git repos.
 
-        self.job.sendWorkData(json.dumps(data))
-        if work_fail:
-            self.job.sendWorkFail()
-        else:
-            self.job.sendWorkComplete(json.dumps(data))
-        del self.worker.gearman_jobs[self.job.unique]
-        self.worker.running_builds.remove(self)
-        self.worker.lock.release()
+        :arg FakeChange changes: One or more changes (varargs) that
+        are expected to be present (in order) in the git repository of
+        the active project.
+
+        :returns: Whether the build has the indicated changes.
+        :rtype: bool
+
+        """
+        for change in changes:
+            path = os.path.join(self.jobdir.src_root, change.project)
+            try:
+                repo = git.Repo(path)
+            except NoSuchPathError as e:
+                self.log.debug('%s' % e)
+                return False
+            ref = self.parameters['ZUUL_REF']
+            repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
+            commit_message = '%s-1' % change.subject
+            self.log.debug("Checking if build %s has changes; commit_message "
+                           "%s; repo_messages %s" % (self, commit_message,
+                                                     repo_messages))
+            if commit_message not in repo_messages:
+                self.log.debug("  messages do not match")
+                return False
+        self.log.debug("  OK")
+        return True
 
 
-class FakeWorker(gear.Worker):
-    def __init__(self, worker_id, test):
-        super(FakeWorker, self).__init__(worker_id)
-        self.gearman_jobs = {}
-        self.build_history = []
-        self.running_builds = []
-        self.build_counter = 0
-        self.fail_tests = {}
-        self.test = test
+class RecordingLaunchServer(zuul.launcher.server.LaunchServer):
+    """An Ansible launcher to be used in tests.
 
+    :ivar bool hold_jobs_in_build: If true, when jobs are launched
+        they will report that they have started but then pause until
+        released before reporting completion.  This attribute may be
+        changed at any time and will take effect for subsequently
+        launched builds, but previously held builds will still need to
+        be explicitly released.
+
+    """
+    def __init__(self, *args, **kw):
+        self._run_ansible = kw.pop('_run_ansible', False)
+        self._test_root = kw.pop('_test_root', False)
+        super(RecordingLaunchServer, self).__init__(*args, **kw)
         self.hold_jobs_in_build = False
         self.lock = threading.Lock()
-        self.__work_thread = threading.Thread(target=self.work)
-        self.__work_thread.daemon = True
-        self.__work_thread.start()
+        self.running_builds = []
+        self.build_history = []
+        self.fail_tests = {}
+        self.job_builds = {}
 
-    def handleJob(self, job):
-        parts = job.name.split(":")
-        cmd = parts[0]
-        name = parts[1]
-        if len(parts) > 2:
-            node = parts[2]
-        else:
-            node = None
-        if cmd == 'build':
-            self.handleBuild(job, name, node)
-        elif cmd == 'stop':
-            self.handleStop(job, name)
-        elif cmd == 'set_description':
-            self.handleSetDescription(job, name)
+    def failJob(self, name, change):
+        """Instruct the launcher to report matching builds as failures.
 
-    def handleBuild(self, job, name, node):
-        build = FakeBuild(self, job, self.build_counter, node)
-        job.build = build
-        self.gearman_jobs[job.unique] = job
-        self.build_counter += 1
+        :arg str name: The name of the job to fail.
+        :arg Change change: The :py:class:`~tests.base.FakeChange`
+            instance which should cause the job to fail.  This job
+            will also fail for changes depending on this change.
 
-        self.running_builds.append(build)
-        build.start()
-
-    def handleStop(self, job, name):
-        self.log.debug("handle stop")
-        parameters = json.loads(job.arguments)
-        name = parameters['name']
-        number = parameters['number']
-        for build in self.running_builds:
-            if build.name == name and build.number == number:
-                build.aborted = True
-                build.release()
-                job.sendWorkComplete()
-                return
-        job.sendWorkFail()
-
-    def handleSetDescription(self, job, name):
-        self.log.debug("handle set description")
-        parameters = json.loads(job.arguments)
-        name = parameters['name']
-        number = parameters['number']
-        descr = parameters['html_description']
-        for build in self.running_builds:
-            if build.name == name and build.number == number:
-                build.description = descr
-                job.sendWorkComplete()
-                return
-        for build in self.build_history:
-            if build.name == name and build.number == number:
-                build.description = descr
-                job.sendWorkComplete()
-                return
-        job.sendWorkFail()
-
-    def work(self):
-        while self.running:
-            try:
-                job = self.getJob()
-            except gear.InterruptedError:
-                continue
-            try:
-                self.handleJob(job)
-            except:
-                self.log.exception("Worker exception:")
-
-    def addFailTest(self, name, change):
+        """
         l = self.fail_tests.get(name, [])
         l.append(change)
         self.fail_tests[name] = l
 
-    def shouldFailTest(self, name, ref):
-        l = self.fail_tests.get(name, [])
-        for change in l:
-            if self.test.ref_has_change(ref, change):
-                return True
-        return False
-
     def release(self, regex=None):
+        """Release a held build.
+
+        :arg str regex: A regular expression which, if supplied, will
+            cause only builds with matching names to be released.  If
+            not supplied, all builds will be released.
+
+        """
         builds = self.running_builds[:]
-        self.log.debug("releasing build %s (%s)" % (regex,
+        self.log.debug("Releasing build %s (%s)" % (regex,
                                                     len(self.running_builds)))
         for build in builds:
             if not regex or re.match(regex, build.name):
-                self.log.debug("releasing build %s" %
+                self.log.debug("Releasing build %s" %
                                (build.parameters['ZUUL_UUID']))
                 build.release()
             else:
-                self.log.debug("not releasing build %s" %
+                self.log.debug("Not releasing build %s" %
                                (build.parameters['ZUUL_UUID']))
-        self.log.debug("done releasing builds %s (%s)" %
+        self.log.debug("Done releasing builds %s (%s)" %
                        (regex, len(self.running_builds)))
 
+    def launchJob(self, job):
+        build = FakeBuild(self, job)
+        job.build = build
+        self.running_builds.append(build)
+        self.job_builds[job.unique] = build
+        args = json.loads(job.arguments)
+        args['vars']['zuul']['_test'] = dict(test_root=self._test_root)
+        job.arguments = json.dumps(args)
+        self.job_workers[job.unique] = RecordingAnsibleJob(self, job)
+        self.job_workers[job.unique].run()
+
+    def stopJob(self, job):
+        self.log.debug("handle stop")
+        parameters = json.loads(job.arguments)
+        uuid = parameters['uuid']
+        for build in self.running_builds:
+            if build.unique == uuid:
+                build.aborted = True
+                build.release()
+        super(RecordingLaunchServer, self).stopJob(job)
+
+
+class RecordingAnsibleJob(zuul.launcher.server.AnsibleJob):
+    def runPlaybooks(self, args):
+        build = self.launcher_server.job_builds[self.job.unique]
+        build.jobdir = self.jobdir
+
+        result = super(RecordingAnsibleJob, self).runPlaybooks(args)
+
+        self.launcher_server.lock.acquire()
+        self.launcher_server.build_history.append(
+            BuildHistory(name=build.name, result=result, changes=build.changes,
+                         node=build.node, uuid=build.unique,
+                         parameters=build.parameters, jobdir=build.jobdir,
+                         pipeline=build.parameters['ZUUL_PIPELINE'])
+        )
+        self.launcher_server.running_builds.remove(build)
+        del self.launcher_server.job_builds[self.job.unique]
+        self.launcher_server.lock.release()
+        return result
+
+    def runAnsible(self, cmd, timeout, trusted=False):
+        build = self.launcher_server.job_builds[self.job.unique]
+
+        if self.launcher_server._run_ansible:
+            result = super(RecordingAnsibleJob, self).runAnsible(
+                cmd, timeout, trusted=trusted)
+        else:
+            result = build.run()
+        return result
+
+    def getHostList(self, args):
+        self.log.debug("hostlist")
+        hosts = super(RecordingAnsibleJob, self).getHostList(args)
+        for name, d in hosts:
+            d['ansible_connection'] = 'local'
+        hosts.append(('localhost', dict(ansible_connection='local')))
+        return hosts
+
 
 class FakeGearmanServer(gear.Server):
+    """A Gearman server for use in tests.
+
+    :ivar bool hold_jobs_in_queue: If true, submitted jobs will be
+        added to the queue but will not be distributed to workers
+        until released.  This attribute may be changed at any time and
+        will take effect for subsequently enqueued jobs, but
+        previously held jobs will still need to be explicitly
+        released.
+
+    """
+
     def __init__(self):
         self.hold_jobs_in_queue = False
         super(FakeGearmanServer, self).__init__(0)
@@ -779,7 +829,7 @@
         for queue in [self.high_queue, self.normal_queue, self.low_queue]:
             for job in queue:
                 if not hasattr(job, 'waiting'):
-                    if job.name.startswith('build:'):
+                    if job.name.startswith('launcher:launch'):
                         job.waiting = self.hold_jobs_in_queue
                     else:
                         job.waiting = False
@@ -795,15 +845,21 @@
         return None
 
     def release(self, regex=None):
+        """Release a held job.
+
+        :arg str regex: A regular expression which, if supplied, will
+            cause only jobs with matching names to be released.  If
+            not supplied, all jobs will be released.
+        """
         released = False
         qlen = (len(self.high_queue) + len(self.normal_queue) +
                 len(self.low_queue))
         self.log.debug("releasing queued job %s (%s)" % (regex, qlen))
         for job in self.getQueue():
-            cmd, name = job.name.split(':')
-            if cmd != 'build':
+            if job.name != 'launcher:launch':
                 continue
-            if not regex or re.match(regex, name):
+            parameters = json.loads(job.arguments)
+            if not regex or re.match(regex, parameters.get('job')):
                 self.log.debug("releasing queued job %s" %
                                job.unique)
                 job.waiting = False
@@ -859,6 +915,181 @@
         return endpoint, ''
 
 
+class FakeNodepool(object):
+    REQUEST_ROOT = '/nodepool/requests'
+    NODE_ROOT = '/nodepool/nodes'
+
+    log = logging.getLogger("zuul.test.FakeNodepool")
+
+    def __init__(self, host, port, chroot):
+        self.client = kazoo.client.KazooClient(
+            hosts='%s:%s%s' % (host, port, chroot))
+        self.client.start()
+        self._running = True
+        self.paused = False
+        self.thread = threading.Thread(target=self.run)
+        self.thread.daemon = True
+        self.thread.start()
+        self.fail_requests = set()
+
+    def stop(self):
+        self._running = False
+        self.thread.join()
+        self.client.stop()
+        self.client.close()
+
+    def run(self):
+        while self._running:
+            self._run()
+            time.sleep(0.1)
+
+    def _run(self):
+        if self.paused:
+            return
+        for req in self.getNodeRequests():
+            self.fulfillRequest(req)
+
+    def getNodeRequests(self):
+        try:
+            reqids = self.client.get_children(self.REQUEST_ROOT)
+        except kazoo.exceptions.NoNodeError:
+            return []
+        reqs = []
+        for oid in sorted(reqids):
+            path = self.REQUEST_ROOT + '/' + oid
+            try:
+                data, stat = self.client.get(path)
+                data = json.loads(data)
+                data['_oid'] = oid
+                reqs.append(data)
+            except kazoo.exceptions.NoNodeError:
+                pass
+        return reqs
+
+    def getNodes(self):
+        try:
+            nodeids = self.client.get_children(self.NODE_ROOT)
+        except kazoo.exceptions.NoNodeError:
+            return []
+        nodes = []
+        for oid in sorted(nodeids):
+            path = self.NODE_ROOT + '/' + oid
+            data, stat = self.client.get(path)
+            data = json.loads(data)
+            data['_oid'] = oid
+            try:
+                lockfiles = self.client.get_children(path + '/lock')
+            except kazoo.exceptions.NoNodeError:
+                lockfiles = []
+            if lockfiles:
+                data['_lock'] = True
+            else:
+                data['_lock'] = False
+            nodes.append(data)
+        return nodes
+
+    def makeNode(self, request_id, node_type):
+        now = time.time()
+        path = '/nodepool/nodes/'
+        data = dict(type=node_type,
+                    provider='test-provider',
+                    region='test-region',
+                    az=None,
+                    public_ipv4='127.0.0.1',
+                    private_ipv4=None,
+                    public_ipv6=None,
+                    allocated_to=request_id,
+                    state='ready',
+                    state_time=now,
+                    created_time=now,
+                    updated_time=now,
+                    image_id=None,
+                    launcher='fake-nodepool')
+        data = json.dumps(data)
+        path = self.client.create(path, data,
+                                  makepath=True,
+                                  sequence=True)
+        nodeid = path.split("/")[-1]
+        return nodeid
+
+    def addFailRequest(self, request):
+        self.fail_requests.add(request['_oid'])
+
+    def fulfillRequest(self, request):
+        if request['state'] != 'requested':
+            return
+        request = request.copy()
+        oid = request['_oid']
+        del request['_oid']
+
+        if oid in self.fail_requests:
+            request['state'] = 'failed'
+        else:
+            request['state'] = 'fulfilled'
+            nodes = []
+            for node in request['node_types']:
+                nodeid = self.makeNode(oid, node)
+                nodes.append(nodeid)
+            request['nodes'] = nodes
+
+        request['state_time'] = time.time()
+        path = self.REQUEST_ROOT + '/' + oid
+        data = json.dumps(request)
+        self.log.debug("Fulfilling node request: %s %s" % (oid, data))
+        self.client.set(path, data)
+
+
+class ChrootedKazooFixture(fixtures.Fixture):
+    def __init__(self):
+        super(ChrootedKazooFixture, self).__init__()
+
+        zk_host = os.environ.get('NODEPOOL_ZK_HOST', 'localhost')
+        if ':' in zk_host:
+            host, port = zk_host.split(':')
+        else:
+            host = zk_host
+            port = None
+
+        self.zookeeper_host = host
+
+        if not port:
+            self.zookeeper_port = 2181
+        else:
+            self.zookeeper_port = int(port)
+
+    def _setUp(self):
+        # Make sure the test chroot paths do not conflict
+        random_bits = ''.join(random.choice(string.ascii_lowercase +
+                                            string.ascii_uppercase)
+                              for x in range(8))
+
+        rand_test_path = '%s_%s' % (random_bits, os.getpid())
+        self.zookeeper_chroot = "/nodepool_test/%s" % rand_test_path
+
+        # Ensure the chroot path exists and clean up any pre-existing znodes.
+        _tmp_client = kazoo.client.KazooClient(
+            hosts='%s:%s' % (self.zookeeper_host, self.zookeeper_port))
+        _tmp_client.start()
+
+        if _tmp_client.exists(self.zookeeper_chroot):
+            _tmp_client.delete(self.zookeeper_chroot, recursive=True)
+
+        _tmp_client.ensure_path(self.zookeeper_chroot)
+        _tmp_client.stop()
+        _tmp_client.close()
+
+        self.addCleanup(self._cleanup)
+
+    def _cleanup(self):
+        '''Remove the chroot path.'''
+        # Need a non-chroot'ed client to remove the chroot path
+        _tmp_client = kazoo.client.KazooClient(
+            hosts='%s:%s' % (self.zookeeper_host, self.zookeeper_port))
+        _tmp_client.start()
+        _tmp_client.delete(self.zookeeper_chroot, recursive=True)
+        _tmp_client.stop()
+
+
 class MySQLSchemaFixture(fixtures.Fixture):
     def setUp(self):
         super(MySQLSchemaFixture, self).setUp()
@@ -898,6 +1129,21 @@
 
 class BaseTestCase(testtools.TestCase):
     log = logging.getLogger("zuul.test")
+    wait_timeout = 20
+
+    def attachLogs(self, *args):
+        def reader():
+            self._log_stream.seek(0)
+            while True:
+                x = self._log_stream.read(4096)
+                if not x:
+                    break
+                yield x.encode('utf8')
+        content = testtools.content.content_from_reader(
+            reader,
+            testtools.content_type.UTF8_TEXT,
+            False)
+        self.addDetail('logging', content)
 
     def setUp(self):
         super(BaseTestCase, self).setUp()
@@ -920,49 +1166,104 @@
             self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
         if (os.environ.get('OS_LOG_CAPTURE') == 'True' or
             os.environ.get('OS_LOG_CAPTURE') == '1'):
-            log_level = logging.DEBUG
-            if os.environ.get('OS_LOG_LEVEL') == 'DEBUG':
-                log_level = logging.DEBUG
-            elif os.environ.get('OS_LOG_LEVEL') == 'INFO':
-                log_level = logging.INFO
-            elif os.environ.get('OS_LOG_LEVEL') == 'WARNING':
-                log_level = logging.WARNING
-            elif os.environ.get('OS_LOG_LEVEL') == 'ERROR':
-                log_level = logging.ERROR
-            elif os.environ.get('OS_LOG_LEVEL') == 'CRITICAL':
-                log_level = logging.CRITICAL
-            self.useFixture(fixtures.FakeLogger(
-                level=log_level,
-                format='%(asctime)s %(name)-32s '
-                '%(levelname)-8s %(message)s'))
+            self._log_stream = StringIO()
+            self.addOnException(self.attachLogs)
+        else:
+            self._log_stream = sys.stdout
 
-            # NOTE(notmorgan): Extract logging overrides for specific libraries
-            # from the OS_LOG_DEFAULTS env and create FakeLogger fixtures for
-            # each. This is used to limit the output during test runs from
-            # libraries that zuul depends on such as gear.
-            log_defaults_from_env = os.environ.get('OS_LOG_DEFAULTS')
+        handler = logging.StreamHandler(self._log_stream)
+        formatter = logging.Formatter('%(asctime)s %(name)-32s '
+                                      '%(levelname)-8s %(message)s')
+        handler.setFormatter(formatter)
 
-            if log_defaults_from_env:
-                for default in log_defaults_from_env.split(','):
-                    try:
-                        name, level_str = default.split('=', 1)
-                        level = getattr(logging, level_str, logging.DEBUG)
-                        self.useFixture(fixtures.FakeLogger(
-                            name=name,
-                            level=level,
-                            format='%(asctime)s %(name)-32s '
-                                   '%(levelname)-8s %(message)s'))
-                    except ValueError:
-                        # NOTE(notmorgan): Invalid format of the log default,
-                        # skip and don't try and apply a logger for the
-                        # specified module
-                        pass
+        logger = logging.getLogger()
+        logger.setLevel(logging.DEBUG)
+        logger.addHandler(handler)
+
+        # NOTE(notmorgan): Extract logging overrides for specific
+        # libraries from the OS_LOG_DEFAULTS env and create loggers
+        # for each. This is used to limit the output during test runs
+        # from libraries that zuul depends on such as gear.
+        log_defaults_from_env = os.environ.get(
+            'OS_LOG_DEFAULTS',
+            'git.cmd=INFO,kazoo.client=WARNING,gear=INFO')
+
+        if log_defaults_from_env:
+            for default in log_defaults_from_env.split(','):
+                try:
+                    name, level_str = default.split('=', 1)
+                    level = getattr(logging, level_str, logging.DEBUG)
+                    logger = logging.getLogger(name)
+                    logger.setLevel(level)
+                    logger.addHandler(handler)
+                    logger.propagate = False
+                except ValueError:
+                    # NOTE(notmorgan): Invalid format of the log default,
+                    # skip and don't try and apply a logger for the
+                    # specified module
+                    pass
 
 
 class ZuulTestCase(BaseTestCase):
+    """A test case with a functioning Zuul.
+
+    The following class variables are used during test setup and can
+    be overidden by subclasses but are effectively read-only once a
+    test method starts running:
+
+    :cvar str config_file: This points to the main zuul config file
+        within the fixtures directory.  Subclasses may override this
+        to obtain a different behavior.
+
+    :cvar str tenant_config_file: This is the tenant config file
+        (which specifies from what git repos the configuration should
+        be loaded).  It defaults to the value specified in
+        `config_file` but can be overidden by subclasses to obtain a
+        different tenant/project layout while using the standard main
+        configuration.
+
+    The following are instance variables that are useful within test
+    methods:
+
+    :ivar FakeGerritConnection fake_<connection>:
+        A :py:class:`~tests.base.FakeGerritConnection` will be
+        instantiated for each connection present in the config file
+        and stored here.  For instance, `fake_gerrit` will hold the
+        FakeGerritConnection object for a connection named `gerrit`.
+
+    :ivar FakeGearmanServer gearman_server: An instance of
+        :py:class:`~tests.base.FakeGearmanServer` which is the Gearman
+        server that all of the Zuul components in this test use to
+        communicate with each other.
+
+    :ivar RecordingLaunchServer launch_server: An instance of
+        :py:class:`~tests.base.RecordingLaunchServer` which is the
+        Ansible launch server used to run jobs for this test.
+
+    :ivar list builds: A list of :py:class:`~tests.base.FakeBuild` objects
+        representing currently running builds.  They are appended to
+        the list in the order they are launched, and removed from this
+        list upon completion.
+
+    :ivar list history: A list of :py:class:`~tests.base.BuildHistory`
+        objects representing completed builds.  They are appended to
+        the list in the order they complete.
+
+    """
+
+    config_file = 'zuul.conf'
+    run_ansible = False
+
+    def _startMerger(self):
+        self.merge_server = zuul.merger.server.MergeServer(self.config,
+                                                           self.connections)
+        self.merge_server.start()
 
     def setUp(self):
         super(ZuulTestCase, self).setUp()
+
+        self.setupZK()
+
         if USE_TEMPDIR:
             tmp_root = self.useFixture(fixtures.TempDir(
                 rootdir=os.environ.get("ZUUL_TEST_ROOT"))
@@ -971,7 +1272,8 @@
             tmp_root = os.environ.get("ZUUL_TEST_ROOT")
         self.test_root = os.path.join(tmp_root, "zuul-test")
         self.upstream_root = os.path.join(self.test_root, "upstream")
-        self.git_root = os.path.join(self.test_root, "git")
+        self.merger_src_root = os.path.join(self.test_root, "merger-git")
+        self.launcher_src_root = os.path.join(self.test_root, "launcher-git")
         self.state_root = os.path.join(self.test_root, "lib")
 
         if os.path.exists(self.test_root):
@@ -982,16 +1284,16 @@
 
         # Make per test copy of Configuration.
         self.setup_config()
-        self.config.set('zuul', 'layout_config',
+        self.config.set('zuul', 'tenant_config',
                         os.path.join(FIXTURE_DIR,
-                                     self.config.get('zuul', 'layout_config')))
-        self.config.set('merger', 'git_dir', self.git_root)
+                                     self.config.get('zuul', 'tenant_config')))
+        self.config.set('merger', 'git_dir', self.merger_src_root)
+        self.config.set('launcher', 'git_dir', self.launcher_src_root)
         self.config.set('zuul', 'state_dir', self.state_root)
 
         # For each project in config:
-        self.init_repo("org/project")
-        self.init_repo("org/project1")
-        self.init_repo("org/project2")
+        # TODOv3(jeblair): remove these and replace with new git
+        # filesystem fixtures
         self.init_repo("org/project3")
         self.init_repo("org/project4")
         self.init_repo("org/project5")
@@ -1019,24 +1321,24 @@
         self.gearman_server = FakeGearmanServer()
 
         self.config.set('gearman', 'port', str(self.gearman_server.port))
+        self.log.info("Gearman server on port %s" %
+                      (self.gearman_server.port,))
 
-        self.worker = FakeWorker('fake_worker', self)
-        self.worker.addServer('127.0.0.1', self.gearman_server.port)
-        self.gearman_server.worker = self.worker
-
-        zuul.source.gerrit.GerritSource.replication_timeout = 1.5
-        zuul.source.gerrit.GerritSource.replication_retry_interval = 0.5
-        zuul.connection.gerrit.GerritEventConnector.delay = 0.0
+        gerritsource.GerritSource.replication_timeout = 1.5
+        gerritsource.GerritSource.replication_retry_interval = 0.5
+        gerritconnection.GerritEventConnector.delay = 0.0
 
         self.sched = zuul.scheduler.Scheduler(self.config)
 
         self.useFixture(fixtures.MonkeyPatch('swiftclient.client.Connection',
                                              FakeSwiftClientConnection))
+
         self.swift = zuul.lib.swift.Swift(self.config)
 
         self.event_queues = [
             self.sched.result_event_queue,
-            self.sched.trigger_event_queue
+            self.sched.trigger_event_queue,
+            self.sched.management_event_queue
         ]
 
         self.configure_connections()
@@ -1050,17 +1352,34 @@
         old_urlopen = urllib.request.urlopen
         urllib.request.urlopen = URLOpenerFactory
 
-        self.merge_server = zuul.merger.server.MergeServer(self.config,
-                                                           self.connections)
-        self.merge_server.start()
+        self._startMerger()
 
-        self.launcher = zuul.launcher.gearman.Gearman(self.config, self.sched,
-                                                      self.swift)
+        self.launch_server = RecordingLaunchServer(
+            self.config, self.connections,
+            jobdir_root=self.test_root,
+            _run_ansible=self.run_ansible,
+            _test_root=self.test_root)
+        self.launch_server.start()
+        self.history = self.launch_server.build_history
+        self.builds = self.launch_server.running_builds
+
+        self.launch_client = zuul.launcher.client.LaunchClient(
+            self.config, self.sched, self.swift)
         self.merge_client = zuul.merger.client.MergeClient(
             self.config, self.sched)
+        self.nodepool = zuul.nodepool.Nodepool(self.sched)
+        self.zk = zuul.zk.ZooKeeper()
+        self.zk.connect(self.zk_config)
 
-        self.sched.setLauncher(self.launcher)
+        self.fake_nodepool = FakeNodepool(
+            self.zk_chroot_fixture.zookeeper_host,
+            self.zk_chroot_fixture.zookeeper_port,
+            self.zk_chroot_fixture.zookeeper_chroot)
+
+        self.sched.setLauncher(self.launch_client)
         self.sched.setMerger(self.merge_client)
+        self.sched.setNodepool(self.nodepool)
+        self.sched.setZooKeeper(self.zk)
 
         self.webapp = zuul.webapp.WebApp(
             self.sched, port=0, listen_address='127.0.0.1')
@@ -1071,15 +1390,34 @@
         self.sched.resume()
         self.webapp.start()
         self.rpc.start()
-        self.launcher.gearman.waitForServer()
-        self.registerJobs()
-        self.builds = self.worker.running_builds
-        self.history = self.worker.build_history
+        self.launch_client.gearman.waitForServer()
 
-        self.addCleanup(self.assertFinalState)
         self.addCleanup(self.shutdown)
 
+    def tearDown(self):
+        super(ZuulTestCase, self).tearDown()
+        self.assertFinalState()
+
     def configure_connections(self):
+        # Set up gerrit related fakes
+        # Set a changes database so multiple FakeGerrit's can report back to
+        # a virtual canonical database given by the configured hostname
+        self.gerrit_changes_dbs = {}
+
+        def getGerritConnection(driver, name, config):
+            db = self.gerrit_changes_dbs.setdefault(config['server'], {})
+            con = FakeGerritConnection(driver, name, config,
+                                       changes_db=db,
+                                       upstream_root=self.upstream_root)
+            self.event_queues.append(con.event_queue)
+            setattr(self, 'fake_' + name, con)
+            return con
+
+        self.useFixture(fixtures.MonkeyPatch(
+            'zuul.driver.gerrit.GerritDriver.getConnection',
+            getGerritConnection))
+
+        # Set up smtp related fakes
         # TODO(jhesketh): This should come from lib.connections for better
         # coverage
         # Register connections from the config
@@ -1091,73 +1429,61 @@
 
         self.useFixture(fixtures.MonkeyPatch('smtplib.SMTP', FakeSMTPFactory))
 
-        # Set a changes database so multiple FakeGerrit's can report back to
-        # a virtual canonical database given by the configured hostname
-        self.gerrit_changes_dbs = {}
-        self.gerrit_queues_dbs = {}
-        self.connections = {}
+        # Register connections from the config using fakes
+        self.connections = zuul.lib.connections.ConnectionRegistry()
+        self.connections.configure(self.config)
 
-        for section_name in self.config.sections():
-            con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
-                                 section_name, re.I)
-            if not con_match:
-                continue
-            con_name = con_match.group(2)
-            con_config = dict(self.config.items(section_name))
-
-            if 'driver' not in con_config:
-                raise Exception("No driver specified for connection %s."
-                                % con_name)
-
-            con_driver = con_config['driver']
-
-            # TODO(jhesketh): load the required class automatically
-            if con_driver == 'gerrit':
-                if con_config['server'] not in self.gerrit_changes_dbs.keys():
-                    self.gerrit_changes_dbs[con_config['server']] = {}
-                if con_config['server'] not in self.gerrit_queues_dbs.keys():
-                    self.gerrit_queues_dbs[con_config['server']] = \
-                        Queue.Queue()
-                    self.event_queues.append(
-                        self.gerrit_queues_dbs[con_config['server']])
-                self.connections[con_name] = FakeGerritConnection(
-                    con_name, con_config,
-                    changes_db=self.gerrit_changes_dbs[con_config['server']],
-                    queues_db=self.gerrit_queues_dbs[con_config['server']],
-                    upstream_root=self.upstream_root
-                )
-                setattr(self, 'fake_' + con_name, self.connections[con_name])
-            elif con_driver == 'smtp':
-                self.connections[con_name] = \
-                    zuul.connection.smtp.SMTPConnection(con_name, con_config)
-            elif con_driver == 'sql':
-                self.connections[con_name] = \
-                    zuul.connection.sql.SQLConnection(con_name, con_config)
-            else:
-                raise Exception("Unknown driver, %s, for connection %s"
-                                % (con_config['driver'], con_name))
-
-        # If the [gerrit] or [smtp] sections still exist, load them in as a
-        # connection named 'gerrit' or 'smtp' respectfully
-
-        if 'gerrit' in self.config.sections():
-            self.gerrit_changes_dbs['gerrit'] = {}
-            self.gerrit_queues_dbs['gerrit'] = Queue.Queue()
-            self.event_queues.append(self.gerrit_queues_dbs['gerrit'])
-            self.connections['gerrit'] = FakeGerritConnection(
-                '_legacy_gerrit', dict(self.config.items('gerrit')),
-                changes_db=self.gerrit_changes_dbs['gerrit'],
-                queues_db=self.gerrit_queues_dbs['gerrit'])
-
-        if 'smtp' in self.config.sections():
-            self.connections['smtp'] = \
-                zuul.connection.smtp.SMTPConnection(
-                    '_legacy_smtp', dict(self.config.items('smtp')))
-
-    def setup_config(self, config_file='zuul.conf'):
-        """Per test config object. Override to set different config."""
+    def setup_config(self):
+        # This creates the per-test configuration object.  It can be
+        # overriden by subclasses, but should not need to be since it
+        # obeys the config_file and tenant_config_file attributes.
         self.config = ConfigParser.ConfigParser()
-        self.config.read(os.path.join(FIXTURE_DIR, config_file))
+        self.config.read(os.path.join(FIXTURE_DIR, self.config_file))
+        if hasattr(self, 'tenant_config_file'):
+            self.config.set('zuul', 'tenant_config', self.tenant_config_file)
+            git_path = os.path.join(
+                os.path.dirname(
+                    os.path.join(FIXTURE_DIR, self.tenant_config_file)),
+                'git')
+            if os.path.exists(git_path):
+                for reponame in os.listdir(git_path):
+                    project = reponame.replace('_', '/')
+                    self.copyDirToRepo(project,
+                                       os.path.join(git_path, reponame))
+
+    def setupZK(self):
+        self.zk_chroot_fixture = self.useFixture(ChrootedKazooFixture())
+        self.zk_config = '%s:%s%s' % (
+            self.zk_chroot_fixture.zookeeper_host,
+            self.zk_chroot_fixture.zookeeper_port,
+            self.zk_chroot_fixture.zookeeper_chroot)
+
+    def copyDirToRepo(self, project, source_path):
+        self.init_repo(project)
+
+        files = {}
+        for (dirpath, dirnames, filenames) in os.walk(source_path):
+            for filename in filenames:
+                test_tree_filepath = os.path.join(dirpath, filename)
+                common_path = os.path.commonprefix([test_tree_filepath,
+                                                    source_path])
+                relative_filepath = test_tree_filepath[len(common_path) + 1:]
+                with open(test_tree_filepath, 'r') as f:
+                    content = f.read()
+                files[relative_filepath] = content
+        self.addCommitToRepo(project, 'add content from fixture',
+                             files, branch='master', tag='init')
+
+    def assertNodepoolState(self):
+        # Make sure that there are no pending requests
+
+        requests = self.fake_nodepool.getNodeRequests()
+        self.assertEqual(len(requests), 0)
+
+        nodes = self.fake_nodepool.getNodes()
+        for node in nodes:
+            self.assertFalse(node['_lock'], "Node %s is locked" %
+                             (node['_oid'],))
 
     def assertFinalState(self):
         # Make sure that git.Repo objects have been garbage collected.
@@ -1165,21 +1491,26 @@
         gc.collect()
         for obj in gc.get_objects():
             if isinstance(obj, git.Repo):
+                self.log.debug("Leaked git repo object: %s" % repr(obj))
+                for r in gc.get_referrers(obj):
+                    self.log.debug("  referrer: %s" % repr(r))
                 repos.append(obj)
         self.assertEqual(len(repos), 0)
         self.assertEmptyQueues()
-        for pipeline in self.sched.layout.pipelines.values():
-            if isinstance(pipeline.manager,
-                          zuul.scheduler.IndependentPipelineManager):
-                self.assertEqual(len(pipeline.queues), 0)
+        self.assertNodepoolState()
+        ipm = zuul.manager.independent.IndependentPipelineManager
+        for tenant in self.sched.abide.tenants.values():
+            for pipeline in tenant.layout.pipelines.values():
+                if isinstance(pipeline.manager, ipm):
+                    self.assertEqual(len(pipeline.queues), 0)
 
     def shutdown(self):
         self.log.debug("Shutting down after tests")
-        self.launcher.stop()
+        self.launch_client.stop()
         self.merge_server.stop()
         self.merge_server.join()
         self.merge_client.stop()
-        self.worker.shutdown()
+        self.launch_server.stop()
         self.sched.stop()
         self.sched.join()
         self.statsd.stop()
@@ -1189,9 +1520,12 @@
         self.rpc.stop()
         self.rpc.join()
         self.gearman_server.shutdown()
+        self.fake_nodepool.stop()
+        self.zk.disconnect()
         threads = threading.enumerate()
         if len(threads) > 1:
             self.log.error("More than one thread is running: %s" % threads)
+        self.printHistory()
 
     def init_repo(self, project):
         parts = project.split('/')
@@ -1201,25 +1535,17 @@
         path = os.path.join(self.upstream_root, project)
         repo = git.Repo.init(path)
 
-        repo.config_writer().set_value('user', 'email', 'user@example.com')
-        repo.config_writer().set_value('user', 'name', 'User Name')
-        repo.config_writer().write()
+        with repo.config_writer() as config_writer:
+            config_writer.set_value('user', 'email', 'user@example.com')
+            config_writer.set_value('user', 'name', 'User Name')
 
-        fn = os.path.join(path, 'README')
-        f = open(fn, 'w')
-        f.write("test\n")
-        f.close()
-        repo.index.add([fn])
         repo.index.commit('initial commit')
         master = repo.create_head('master')
-        repo.create_tag('init')
 
         repo.head.reference = master
         zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
-        self.create_branch(project, 'mp')
-
     def create_branch(self, project, branch):
         path = os.path.join(self.upstream_root, project)
         repo = git.Repo.init(path)
@@ -1248,56 +1574,6 @@
         commit = repo.index.commit('Creating a fake commit')
         return commit.hexsha
 
-    def ref_has_change(self, ref, change):
-        path = os.path.join(self.git_root, change.project)
-        repo = git.Repo(path)
-        try:
-            for commit in repo.iter_commits(ref):
-                if commit.message.strip() == ('%s-1' % change.subject):
-                    return True
-        except GitCommandError:
-            pass
-        return False
-
-    def job_has_changes(self, *args):
-        job = args[0]
-        commits = args[1:]
-        if isinstance(job, FakeBuild):
-            parameters = job.parameters
-        else:
-            parameters = json.loads(job.arguments)
-        project = parameters['ZUUL_PROJECT']
-        path = os.path.join(self.git_root, project)
-        repo = git.Repo(path)
-        ref = parameters['ZUUL_REF']
-        sha = parameters['ZUUL_COMMIT']
-        repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
-        repo_shas = [c.hexsha for c in repo.iter_commits(ref)]
-        commit_messages = ['%s-1' % commit.subject for commit in commits]
-        self.log.debug("Checking if job %s has changes; commit_messages %s;"
-                       " repo_messages %s; sha %s" % (job, commit_messages,
-                                                      repo_messages, sha))
-        for msg in commit_messages:
-            if msg not in repo_messages:
-                self.log.debug("  messages do not match")
-                return False
-        if repo_shas[0] != sha:
-            self.log.debug("  sha does not match")
-            return False
-        self.log.debug("  OK")
-        return True
-
-    def registerJobs(self):
-        count = 0
-        for job in self.sched.layout.jobs.keys():
-            self.worker.registerFunction('build:' + job)
-            count += 1
-        self.worker.registerFunction('stop:' + self.worker.worker_id)
-        count += 1
-
-        while len(self.gearman_server.functions) < count:
-            time.sleep(0)
-
     def orderedRelease(self):
         # Run one build at a time to ensure non-race order:
         while len(self.builds):
@@ -1319,47 +1595,31 @@
             parameters = json.loads(job.arguments)
             return parameters[name]
 
-    def resetGearmanServer(self):
-        self.worker.setFunctions([])
-        while True:
-            done = True
-            for connection in self.gearman_server.active_connections:
-                if (connection.functions and
-                    connection.client_id not in ['Zuul RPC Listener',
-                                                 'Zuul Merger']):
-                    done = False
-            if done:
-                break
-            time.sleep(0)
-        self.gearman_server.functions = set()
-        self.rpc.register()
-        self.merge_server.register()
-
     def haveAllBuildsReported(self):
         # See if Zuul is waiting on a meta job to complete
-        if self.launcher.meta_jobs:
+        if self.launch_client.meta_jobs:
             return False
         # Find out if every build that the worker has completed has been
         # reported back to Zuul.  If it hasn't then that means a Gearman
         # event is still in transit and the system is not stable.
-        for build in self.worker.build_history:
-            zbuild = self.launcher.builds.get(build.uuid)
+        for build in self.history:
+            zbuild = self.launch_client.builds.get(build.uuid)
             if not zbuild:
                 # It has already been reported
                 continue
             # It hasn't been reported yet.
             return False
         # Make sure that none of the worker connections are in GRAB_WAIT
-        for connection in self.worker.active_connections:
+        for connection in self.launch_server.worker.active_connections:
             if connection.state == 'GRAB_WAIT':
                 return False
         return True
 
     def areAllBuildsWaiting(self):
-        builds = self.launcher.builds.values()
+        builds = self.launch_client.builds.values()
         for build in builds:
             client_job = None
-            for conn in self.launcher.gearman.active_connections:
+            for conn in self.launch_client.gearman.active_connections:
                 for j in conn.related_jobs.values():
                     if j.unique == build.uuid:
                         client_job = j
@@ -1381,21 +1641,28 @@
                 return False
             if server_job.waiting:
                 continue
-            worker_job = self.worker.gearman_jobs.get(server_job.unique)
-            if worker_job:
-                if build.number is None:
-                    self.log.debug("%s has not reported start" % worker_job)
-                    return False
-                if worker_job.build.isWaiting():
+            if build.url is None:
+                self.log.debug("%s has not reported start" % build)
+                return False
+            worker_build = self.launch_server.job_builds.get(server_job.unique)
+            if worker_build:
+                if worker_build.isWaiting():
                     continue
                 else:
-                    self.log.debug("%s is running" % worker_job)
+                    self.log.debug("%s is running" % worker_build)
                     return False
             else:
                 self.log.debug("%s is unassigned" % server_job)
                 return False
         return True
 
+    def areAllNodeRequestsComplete(self):
+        if self.fake_nodepool.paused:
+            return True
+        if self.sched.nodepool.requests:
+            return False
+        return True
+
     def eventQueuesEmpty(self):
         for queue in self.event_queues:
             yield queue.empty()
@@ -1408,59 +1675,74 @@
         self.log.debug("Waiting until settled...")
         start = time.time()
         while True:
-            if time.time() - start > 10:
-                self.log.debug("Queue status:")
+            if time.time() - start > self.wait_timeout:
+                self.log.error("Timeout waiting for Zuul to settle")
+                self.log.error("Queue status:")
                 for queue in self.event_queues:
-                    self.log.debug("  %s: %s" % (queue, queue.empty()))
-                self.log.debug("All builds waiting: %s" %
+                    self.log.error("  %s: %s" % (queue, queue.empty()))
+                self.log.error("All builds waiting: %s" %
                                (self.areAllBuildsWaiting(),))
+                self.log.error("All builds reported: %s" %
+                               (self.haveAllBuildsReported(),))
+                self.log.error("All requests completed: %s" %
+                               (self.areAllNodeRequestsComplete(),))
+                self.log.error("Merge client jobs: %s" %
+                               (self.merge_client.jobs,))
                 raise Exception("Timeout waiting for Zuul to settle")
             # Make sure no new events show up while we're checking
-            self.worker.lock.acquire()
+
+            self.launch_server.lock.acquire()
             # have all build states propogated to zuul?
             if self.haveAllBuildsReported():
                 # Join ensures that the queue is empty _and_ events have been
                 # processed
                 self.eventQueuesJoin()
                 self.sched.run_handler_lock.acquire()
-                if (not self.merge_client.build_sets and
-                    all(self.eventQueuesEmpty()) and
+                if (not self.merge_client.jobs and
                     self.haveAllBuildsReported() and
-                    self.areAllBuildsWaiting()):
+                    self.areAllBuildsWaiting() and
+                    self.areAllNodeRequestsComplete() and
+                    all(self.eventQueuesEmpty())):
+                    # The queue empty check is placed at the end to
+                    # ensure that if a component adds an event between
+                    # when locked the run handler and checked that the
+                    # components were stable, we don't erroneously
+                    # report that we are settled.
                     self.sched.run_handler_lock.release()
-                    self.worker.lock.release()
+                    self.launch_server.lock.release()
                     self.log.debug("...settled.")
                     return
                 self.sched.run_handler_lock.release()
-            self.worker.lock.release()
+            self.launch_server.lock.release()
             self.sched.wake_event.wait(0.1)
 
     def countJobResults(self, jobs, result):
         jobs = filter(lambda x: x.result == result, jobs)
         return len(jobs)
 
-    def getJobFromHistory(self, name):
-        history = self.worker.build_history
-        for job in history:
-            if job.name == name:
+    def getJobFromHistory(self, name, project=None):
+        for job in self.history:
+            if (job.name == name and
+                (project is None or
+                 job.parameters['ZUUL_PROJECT'] == project)):
                 return job
         raise Exception("Unable to find job %s in history" % name)
 
     def assertEmptyQueues(self):
         # Make sure there are no orphaned jobs
-        for pipeline in self.sched.layout.pipelines.values():
-            for queue in pipeline.queues:
-                if len(queue.queue) != 0:
-                    print('pipeline %s queue %s contents %s' % (
-                        pipeline.name, queue.name, queue.queue))
-                self.assertEqual(len(queue.queue), 0,
-                                 "Pipelines queues should be empty")
+        for tenant in self.sched.abide.tenants.values():
+            for pipeline in tenant.layout.pipelines.values():
+                for queue in pipeline.queues:
+                    if len(queue.queue) != 0:
+                        print('pipeline %s queue %s contents %s' % (
+                            pipeline.name, queue.name, queue.queue))
+                    self.assertEqual(len(queue.queue), 0,
+                                     "Pipelines queues should be empty")
 
     def assertReportedStat(self, key, value=None, kind=None):
         start = time.time()
         while time.time() < (start + 5):
             for stat in self.statsd.stats:
-                pprint.pprint(self.statsd.stats)
                 k, v = stat.split(':')
                 if key == k:
                     if value is None and kind is None:
@@ -1473,9 +1755,183 @@
                             return
             time.sleep(0.1)
 
-        pprint.pprint(self.statsd.stats)
         raise Exception("Key %s not found in reported stats" % key)
 
+    def assertBuilds(self, builds):
+        """Assert that the running builds are as described.
+
+        The list of running builds is examined and must match exactly
+        the list of builds described by the input.
+
+        :arg list builds: A list of dictionaries.  Each item in the
+            list must match the corresponding build in the build
+            history, and each element of the dictionary must match the
+            corresponding attribute of the build.
+
+        """
+        try:
+            self.assertEqual(len(self.builds), len(builds))
+            for i, d in enumerate(builds):
+                for k, v in d.items():
+                    self.assertEqual(
+                        getattr(self.builds[i], k), v,
+                        "Element %i in builds does not match" % (i,))
+        except Exception:
+            for build in self.builds:
+                self.log.error("Running build: %s" % build)
+            else:
+                self.log.error("No running builds")
+            raise
+
+    def assertHistory(self, history, ordered=True):
+        """Assert that the completed builds are as described.
+
+        The list of completed builds is examined and must match
+        exactly the list of builds described by the input.
+
+        :arg list history: A list of dictionaries.  Each item in the
+            list must match the corresponding build in the build
+            history, and each element of the dictionary must match the
+            corresponding attribute of the build.
+
+        :arg bool ordered: If true, the history must match the order
+            supplied, if false, the builds are permitted to have
+            arrived in any order.
+
+        """
+        def matches(history_item, item):
+            for k, v in item.items():
+                if getattr(history_item, k) != v:
+                    return False
+            return True
+        try:
+            self.assertEqual(len(self.history), len(history))
+            if ordered:
+                for i, d in enumerate(history):
+                    if not matches(self.history[i], d):
+                        raise Exception(
+                            "Element %i in history does not match" % (i,))
+            else:
+                unseen = self.history[:]
+                for i, d in enumerate(history):
+                    found = False
+                    for unseen_item in unseen:
+                        if matches(unseen_item, d):
+                            found = True
+                            unseen.remove(unseen_item)
+                            break
+                    if not found:
+                        raise Exception("No match found for element %i "
+                                        "in history" % (i,))
+                if unseen:
+                    raise Exception("Unexpected items in history")
+        except Exception:
+            for build in self.history:
+                self.log.error("Completed build: %s" % build)
+            else:
+                self.log.error("No completed builds")
+            raise
+
+    def printHistory(self):
+        """Log the build history.
+
+        This can be useful during tests to summarize what jobs have
+        completed.
+
+        """
+        self.log.debug("Build history:")
+        for build in self.history:
+            self.log.debug(build)
+
+    def getPipeline(self, name):
+        return self.sched.abide.tenants.values()[0].layout.pipelines.get(name)
+
+    def updateConfigLayout(self, path):
+        root = os.path.join(self.test_root, "config")
+        if not os.path.exists(root):
+            os.makedirs(root)
+        f = tempfile.NamedTemporaryFile(dir=root, delete=False)
+        f.write("""
+- tenant:
+    name: openstack
+    source:
+      gerrit:
+        config-repos:
+          - %s
+        """ % path)
+        f.close()
+        self.config.set('zuul', 'tenant_config',
+                        os.path.join(FIXTURE_DIR, f.name))
+
+    def addCommitToRepo(self, project, message, files,
+                        branch='master', tag=None):
+        path = os.path.join(self.upstream_root, project)
+        repo = git.Repo(path)
+        repo.head.reference = branch
+        zuul.merger.merger.reset_repo_to_head(repo)
+        for fn, content in files.items():
+            fn = os.path.join(path, fn)
+            try:
+                os.makedirs(os.path.dirname(fn))
+            except OSError:
+                pass
+            with open(fn, 'w') as f:
+                f.write(content)
+            repo.index.add([fn])
+        commit = repo.index.commit(message)
+        before = repo.heads[branch].commit
+        repo.heads[branch].commit = commit
+        repo.head.reference = branch
+        repo.git.clean('-x', '-f', '-d')
+        repo.heads[branch].checkout()
+        if tag:
+            repo.create_tag(tag)
+        return before
+
+    def commitLayoutUpdate(self, orig_name, source_name):
+        source_path = os.path.join(self.test_root, 'upstream',
+                                   source_name, 'zuul.yaml')
+        with open(source_path, 'r') as nt:
+            before = self.addCommitToRepo(
+                orig_name, 'Pulling content from %s' % source_name,
+                {'zuul.yaml': nt.read()})
+        return before
+
+    def addEvent(self, connection, event):
+        """Inject a Fake (Gerrit) event.
+
+        This method accepts a JSON-encoded event and simulates Zuul
+        having received it from Gerrit.  It could (and should)
+        eventually apply to any connection type, but is currently only
+        used with Gerrit connections.  The name of the connection is
+        used to look up the corresponding server, and the event is
+        simulated as having been received by all Zuul connections
+        attached to that server.  So if two Gerrit connections in Zuul
+        are connected to the same Gerrit server, and you invoke this
+        method specifying the name of one of them, the event will be
+        received by both.
+
+        .. note::
+
+            "self.fake_gerrit.addEvent" calls should be migrated to
+            this method.
+
+        :arg str connection: The name of the connection corresponding
+        to the gerrit server.
+        :arg str event: The JSON-encoded event.
+
+        """
+        specified_conn = self.connections.connections[connection]
+        for conn in self.connections.connections.values():
+            if (isinstance(conn, specified_conn.__class__) and
+                specified_conn.server == conn.server):
+                conn.addEvent(event)
+
+
+class AnsibleZuulTestCase(ZuulTestCase):
+    """ZuulTestCase but with an actual ansible launcher running"""
+    run_ansible = True
+
 
 class ZuulDBTestCase(ZuulTestCase):
     def setup_config(self, config_file='zuul-connections-same-gerrit.conf'):
diff --git a/tests/fixtures/config/ansible/git/bare-role/tasks/main.yaml b/tests/fixtures/config/ansible/git/bare-role/tasks/main.yaml
new file mode 100644
index 0000000..75943b1
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/bare-role/tasks/main.yaml
@@ -0,0 +1,3 @@
+- file:
+    path: "{{zuul._test.test_root}}/{{zuul.uuid}}.bare-role.flag"
+    state: touch
diff --git a/tests/fixtures/config/ansible/git/common-config/playbooks/post.yaml b/tests/fixtures/config/ansible/git/common-config/playbooks/post.yaml
new file mode 100644
index 0000000..2e512b1
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/common-config/playbooks/post.yaml
@@ -0,0 +1,5 @@
+- hosts: all
+  tasks:
+    - file:
+        path: "{{zuul._test.test_root}}/{{zuul.uuid}}.post.flag"
+        state: touch
diff --git a/tests/fixtures/config/ansible/git/common-config/playbooks/pre.yaml b/tests/fixtures/config/ansible/git/common-config/playbooks/pre.yaml
new file mode 100644
index 0000000..f4222ff
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/common-config/playbooks/pre.yaml
@@ -0,0 +1,5 @@
+- hosts: all
+  tasks:
+    - file:
+        path: "{{zuul._test.test_root}}/{{zuul.uuid}}.pre.flag"
+        state: touch
diff --git a/tests/fixtures/config/ansible/git/common-config/playbooks/python27.yaml b/tests/fixtures/config/ansible/git/common-config/playbooks/python27.yaml
new file mode 100644
index 0000000..45acb87
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/common-config/playbooks/python27.yaml
@@ -0,0 +1,10 @@
+- hosts: all
+  tasks:
+    - file:
+        path: "{{flagpath}}"
+        state: touch
+    - copy:
+        src: "{{zuul._test.test_root}}/{{zuul.uuid}}.flag"
+        dest: "{{zuul._test.test_root}}/{{zuul.uuid}}.copied"
+  roles:
+    - bare-role
diff --git a/tests/fixtures/config/ansible/git/common-config/playbooks/timeout.yaml b/tests/fixtures/config/ansible/git/common-config/playbooks/timeout.yaml
new file mode 100644
index 0000000..4af20eb
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/common-config/playbooks/timeout.yaml
@@ -0,0 +1,4 @@
+- hosts: all
+  tasks:
+    - name: Pause for 60 seconds, so zuul aborts our job.
+      shell: sleep 60
diff --git a/tests/fixtures/config/ansible/git/common-config/zuul.yaml b/tests/fixtures/config/ansible/git/common-config/zuul.yaml
new file mode 100644
index 0000000..30148f0
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/common-config/zuul.yaml
@@ -0,0 +1,51 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- job:
+    name: python27
+    pre-run: pre
+    post-run: post
+    vars:
+      flagpath: "{{zuul._test.test_root}}/{{zuul.uuid}}.flag"
+    roles:
+      - zuul: bare-role
+
+- job:
+    parent: python27
+    name: timeout
+    timeout: 1
diff --git a/tests/fixtures/config/ansible/git/org_project/.zuul.yaml b/tests/fixtures/config/ansible/git/org_project/.zuul.yaml
new file mode 100644
index 0000000..c76ba70
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/org_project/.zuul.yaml
@@ -0,0 +1,12 @@
+- job:
+    parent: python27
+    name: faillocal
+
+- project:
+    name: org/project
+
+    check:
+      jobs:
+        - python27
+        - faillocal
+        - timeout
diff --git a/tests/fixtures/config/ansible/git/org_project/README b/tests/fixtures/config/ansible/git/org_project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/ansible/git/org_project/playbooks/faillocal.yaml b/tests/fixtures/config/ansible/git/org_project/playbooks/faillocal.yaml
new file mode 100644
index 0000000..6689e18
--- /dev/null
+++ b/tests/fixtures/config/ansible/git/org_project/playbooks/faillocal.yaml
@@ -0,0 +1,5 @@
+- hosts: all
+  tasks:
+    - copy:
+        src: "{{zuul._test.test_root}}/{{zuul.uuid}}.flag"
+        dest: "{{zuul._test.test_root}}/{{zuul.uuid}}.failed"
diff --git a/tests/fixtures/config/ansible/main.yaml b/tests/fixtures/config/ansible/main.yaml
new file mode 100644
index 0000000..8df99f4
--- /dev/null
+++ b/tests/fixtures/config/ansible/main.yaml
@@ -0,0 +1,9 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
+        project-repos:
+          - org/project
+          - bare-role
diff --git a/tests/fixtures/config/duplicate-pipeline/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/duplicate-pipeline/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/duplicate-pipeline/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/duplicate-pipeline/git/common-config/zuul.yaml b/tests/fixtures/config/duplicate-pipeline/git/common-config/zuul.yaml
new file mode 100755
index 0000000..bc88b06
--- /dev/null
+++ b/tests/fixtures/config/duplicate-pipeline/git/common-config/zuul.yaml
@@ -0,0 +1,46 @@
+- pipeline:
+    name: dup1
+    manager: independent
+    success-message: Build succeeded (dup1).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: change-restored
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: dup2
+    manager: independent
+    success-message: Build succeeded (dup2).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: change-restored
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- project:
+    name: org/project
+    dup1:
+      queue: integrated
+      jobs:
+        - project-test1
+
+    dup2:
+      queue: integrated
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/duplicate-pipeline/git/org_project/README b/tests/fixtures/config/duplicate-pipeline/git/org_project/README
new file mode 100755
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/duplicate-pipeline/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/duplicate-pipeline/main.yaml b/tests/fixtures/config/duplicate-pipeline/main.yaml
new file mode 100755
index 0000000..ba2d8f5
--- /dev/null
+++ b/tests/fixtures/config/duplicate-pipeline/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-duplicate
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/git-driver/git/common-config/zuul.yaml b/tests/fixtures/config/git-driver/git/common-config/zuul.yaml
new file mode 100644
index 0000000..0e332e4
--- /dev/null
+++ b/tests/fixtures/config/git-driver/git/common-config/zuul.yaml
@@ -0,0 +1,22 @@
+- pipeline:
+    name: check
+    manager: independent
+    source: gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/git-driver/git/org_project/README b/tests/fixtures/config/git-driver/git/org_project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/git-driver/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/git-driver/main.yaml b/tests/fixtures/config/git-driver/main.yaml
new file mode 100644
index 0000000..5b9b3d9
--- /dev/null
+++ b/tests/fixtures/config/git-driver/main.yaml
@@ -0,0 +1,9 @@
+- tenant:
+    name: tenant-one
+    source:
+      git:
+        config-repos:
+          - common-config
+      gerrit:
+        project-repos:
+          - org/project
diff --git a/tests/fixtures/config/in-repo/git/common-config/zuul.yaml b/tests/fixtures/config/in-repo/git/common-config/zuul.yaml
new file mode 100644
index 0000000..58b2051
--- /dev/null
+++ b/tests/fixtures/config/in-repo/git/common-config/zuul.yaml
@@ -0,0 +1,37 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: tenant-one-gate
+    manager: dependent
+    success-message: Build succeeded (tenant-one-gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
diff --git a/tests/fixtures/config/in-repo/git/org_project/.zuul.yaml b/tests/fixtures/config/in-repo/git/org_project/.zuul.yaml
new file mode 100644
index 0000000..d6f083d
--- /dev/null
+++ b/tests/fixtures/config/in-repo/git/org_project/.zuul.yaml
@@ -0,0 +1,8 @@
+- job:
+    name: project-test1
+
+- project:
+    name: org/project
+    tenant-one-gate:
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/in-repo/git/org_project/README b/tests/fixtures/config/in-repo/git/org_project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/in-repo/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/in-repo/git/org_project/playbooks/project-test1.yaml b/tests/fixtures/config/in-repo/git/org_project/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/in-repo/git/org_project/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/in-repo/main.yaml b/tests/fixtures/config/in-repo/main.yaml
new file mode 100644
index 0000000..d9868fa
--- /dev/null
+++ b/tests/fixtures/config/in-repo/main.yaml
@@ -0,0 +1,8 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
+        project-repos:
+          - org/project
diff --git a/tests/fixtures/config/merge-modes/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/merge-modes/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/merge-modes/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/merges/git/common-config/playbooks/project-merge.yaml b/tests/fixtures/config/merges/git/common-config/playbooks/project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/merges/git/common-config/playbooks/project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/merges/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/merges/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/merges/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/merges/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/merges/git/common-config/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/merges/git/common-config/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/merges/git/common-config/zuul.yaml b/tests/fixtures/config/merges/git/common-config/zuul.yaml
new file mode 100644
index 0000000..bb91f3a
--- /dev/null
+++ b/tests/fixtures/config/merges/git/common-config/zuul.yaml
@@ -0,0 +1,80 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- job:
+    name:
+      project-test1
+
+- job:
+    name:
+      project-test2
+
+- job:
+    name:
+      project-merge
+    hold-following-changes: true
+
+- project:
+    name: org/project-merge
+    merge-mode: merge
+    gate:
+      jobs:
+        - project-test1
+
+- project:
+    name: org/project-merge-resolve
+    merge-mode: merge-resolve
+    gate:
+      jobs:
+        - project-test1
+
+- project:
+    name: org/project-cherry-pick
+    merge-mode: cherry-pick
+    gate:
+      jobs:
+        - project-test1
+
+- project:
+    name: org/project-merge-branches
+    merge-mode: cherry-pick
+    gate:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
diff --git a/tests/fixtures/config/merges/git/org_project-cherry-pick/README b/tests/fixtures/config/merges/git/org_project-cherry-pick/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/merges/git/org_project-cherry-pick/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/merges/git/org_project-merge-branches/README b/tests/fixtures/config/merges/git/org_project-merge-branches/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/merges/git/org_project-merge-branches/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/merges/git/org_project-merge-resolve/README b/tests/fixtures/config/merges/git/org_project-merge-resolve/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/merges/git/org_project-merge-resolve/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/merges/git/org_project-merge/README b/tests/fixtures/config/merges/git/org_project-merge/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/merges/git/org_project-merge/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/merges/main.yaml b/tests/fixtures/config/merges/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/merges/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/multi-tenant/git/common-config/playbooks/python27.yaml b/tests/fixtures/config/multi-tenant/git/common-config/playbooks/python27.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/common-config/playbooks/python27.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/multi-tenant/git/common-config/zuul.yaml b/tests/fixtures/config/multi-tenant/git/common-config/zuul.yaml
new file mode 100644
index 0000000..08117d6
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/common-config/zuul.yaml
@@ -0,0 +1,21 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name:
+      python27
+    nodes:
+      - name: controller
+        image: ubuntu-trusty
diff --git a/tests/fixtures/config/multi-tenant/git/org_project1/README b/tests/fixtures/config/multi-tenant/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/multi-tenant/git/org_project2/README b/tests/fixtures/config/multi-tenant/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/multi-tenant/git/tenant-one-config/playbooks/project1-test1.yaml b/tests/fixtures/config/multi-tenant/git/tenant-one-config/playbooks/project1-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/tenant-one-config/playbooks/project1-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/multi-tenant/git/tenant-one-config/zuul.yaml b/tests/fixtures/config/multi-tenant/git/tenant-one-config/zuul.yaml
new file mode 100644
index 0000000..4a653f6
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/tenant-one-config/zuul.yaml
@@ -0,0 +1,43 @@
+- pipeline:
+    name: tenant-one-gate
+    manager: dependent
+    success-message: Build succeeded (tenant-one-gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- nodeset:
+    name: nodeset1
+    nodes:
+      - name: controller
+        image: controller-image
+
+- job:
+    name:
+      project1-test1
+
+- project:
+    name: org/project1
+    check:
+      jobs:
+        - python27
+        - project1-test1
+    tenant-one-gate:
+      jobs:
+        - python27
+        - project1-test1
diff --git a/tests/fixtures/config/multi-tenant/git/tenant-two-config/playbooks/project2-test1.yaml b/tests/fixtures/config/multi-tenant/git/tenant-two-config/playbooks/project2-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/tenant-two-config/playbooks/project2-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/multi-tenant/git/tenant-two-config/zuul.yaml b/tests/fixtures/config/multi-tenant/git/tenant-two-config/zuul.yaml
new file mode 100644
index 0000000..7c79720
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/git/tenant-two-config/zuul.yaml
@@ -0,0 +1,43 @@
+- pipeline:
+    name: tenant-two-gate
+    manager: dependent
+    success-message: Build succeeded (tenant-two-gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- nodeset:
+    name: nodeset1
+    nodes:
+      - name: controller
+        image: controller-image
+
+- job:
+    name:
+      project2-test1
+
+- project:
+    name: org/project2
+    check:
+      jobs:
+        - python27
+        - project2-test1
+    tenant-two-gate:
+      jobs:
+        - python27
+        - project2-test1
diff --git a/tests/fixtures/config/multi-tenant/main.yaml b/tests/fixtures/config/multi-tenant/main.yaml
new file mode 100644
index 0000000..b1c47b1
--- /dev/null
+++ b/tests/fixtures/config/multi-tenant/main.yaml
@@ -0,0 +1,15 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
+          - tenant-one-config
+
+- tenant:
+    name: tenant-two
+    source:
+      gerrit:
+        config-repos:
+          - common-config
+          - tenant-two-config
diff --git a/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-merge.yaml b/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-post.yaml b/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-post.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/one-job-project/git/common-config/playbooks/one-job-project-post.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/one-job-project/git/common-config/zuul.yaml b/tests/fixtures/config/one-job-project/git/common-config/zuul.yaml
new file mode 100644
index 0000000..148ba42
--- /dev/null
+++ b/tests/fixtures/config/one-job-project/git/common-config/zuul.yaml
@@ -0,0 +1,66 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- pipeline:
+    name: post
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+
+- job:
+    name: one-job-project-merge
+    hold-following-changes: true
+
+- job:
+    name: one-job-project-post
+
+- project:
+    name: org/one-job-project
+    check:
+      jobs:
+        - one-job-project-merge
+    gate:
+      jobs:
+        - one-job-project-merge
+    post:
+      jobs:
+        - one-job-project-post
diff --git a/tests/fixtures/config/one-job-project/git/org_one-job-project/README b/tests/fixtures/config/one-job-project/git/org_one-job-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/one-job-project/git/org_one-job-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/one-job-project/main.yaml b/tests/fixtures/config/one-job-project/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/one-job-project/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/openstack/git/openstack_keystone/README b/tests/fixtures/config/openstack/git/openstack_keystone/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/openstack_keystone/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/openstack/git/openstack_nova/README b/tests/fixtures/config/openstack/git/openstack_nova/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/openstack_nova/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/openstack/git/project-config/playbooks/base.yaml b/tests/fixtures/config/openstack/git/project-config/playbooks/base.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/project-config/playbooks/base.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/openstack/git/project-config/playbooks/dsvm.yaml b/tests/fixtures/config/openstack/git/project-config/playbooks/dsvm.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/project-config/playbooks/dsvm.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/openstack/git/project-config/playbooks/python27.yaml b/tests/fixtures/config/openstack/git/project-config/playbooks/python27.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/project-config/playbooks/python27.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/openstack/git/project-config/playbooks/python35.yaml b/tests/fixtures/config/openstack/git/project-config/playbooks/python35.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/project-config/playbooks/python35.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/openstack/git/project-config/zuul.yaml b/tests/fixtures/config/openstack/git/project-config/zuul.yaml
new file mode 100644
index 0000000..420d979
--- /dev/null
+++ b/tests/fixtures/config/openstack/git/project-config/zuul.yaml
@@ -0,0 +1,101 @@
+# Pipeline definitions
+
+- pipeline:
+    name: check
+    manager: independent
+    success-message: Build succeeded (check).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+# Job definitions
+
+- job:
+    name: base
+    timeout: 30
+    nodes:
+      - name: controller
+        image: ubuntu-xenial
+
+- job:
+    name: python27
+    parent: base
+
+- job:
+    name: python27
+    parent: base
+    branches: stable/mitaka
+    nodes:
+      - name: controller
+        image: ubuntu-trusty
+
+- job:
+    name: python35
+    parent: base
+
+- project-template:
+    name: python-jobs
+    gate:
+      jobs:
+        - python27
+        - python35
+
+- job:
+    name: dsvm
+    parent: base
+    repos:
+      - openstack/keystone
+      - openstack/nova
+
+# Project definitions
+
+- project:
+    name: openstack/nova
+    templates:
+      - python-jobs
+    check:
+      jobs:
+        - dsvm
+    gate:
+      queue: integrated
+
+- project:
+    name: openstack/keystone
+    templates:
+      - python-jobs
+    check:
+      jobs:
+        - dsvm
+    gate:
+      queue: integrated
diff --git a/tests/fixtures/config/openstack/main.yaml b/tests/fixtures/config/openstack/main.yaml
new file mode 100644
index 0000000..95a0952
--- /dev/null
+++ b/tests/fixtures/config/openstack/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: openstack
+    source:
+      gerrit:
+        config-repos:
+          - project-config
diff --git a/tests/fixtures/config/requirements/email/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/email/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/email/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/email/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/email/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/email/git/common-config/zuul.yaml
new file mode 100644
index 0000000..09e0cc6
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/git/common-config/zuul.yaml
@@ -0,0 +1,52 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    require:
+      approval:
+        - email: jenkins@example.com
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - email: jenkins@example.com
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/email/git/org_project1/README b/tests/fixtures/config/requirements/email/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/email/git/org_project2/README b/tests/fixtures/config/requirements/email/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/email/main.yaml b/tests/fixtures/config/requirements/email/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/email/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/newer-than/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/newer-than/git/common-config/zuul.yaml
new file mode 100644
index 0000000..cd76afd
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/git/common-config/zuul.yaml
@@ -0,0 +1,54 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    require:
+      approval:
+        - username: jenkins
+          newer-than: 48h
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - username: jenkins
+              newer-than: 48h
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/newer-than/git/org_project1/README b/tests/fixtures/config/requirements/newer-than/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/newer-than/git/org_project2/README b/tests/fixtures/config/requirements/newer-than/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/newer-than/main.yaml b/tests/fixtures/config/requirements/newer-than/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/newer-than/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/older-than/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/older-than/git/common-config/zuul.yaml
new file mode 100644
index 0000000..8dca5e6
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/git/common-config/zuul.yaml
@@ -0,0 +1,54 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    require:
+      approval:
+        - username: jenkins
+          older-than: 48h
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - username: jenkins
+              older-than: 48h
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/older-than/git/org_project1/README b/tests/fixtures/config/requirements/older-than/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/older-than/git/org_project2/README b/tests/fixtures/config/requirements/older-than/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/older-than/main.yaml b/tests/fixtures/config/requirements/older-than/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/older-than/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/reject-username/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/reject-username/git/common-config/zuul.yaml
new file mode 100644
index 0000000..92c7de2
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/git/common-config/zuul.yaml
@@ -0,0 +1,52 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
+    reject:
+      approval:
+        - username: 'jenkins'
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          reject-approval:
+            - username: 'jenkins'
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/reject-username/git/org_project1/README b/tests/fixtures/config/requirements/reject-username/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/reject-username/git/org_project2/README b/tests/fixtures/config/requirements/reject-username/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/reject-username/main.yaml b/tests/fixtures/config/requirements/reject-username/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject-username/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/layout-requirement-reject.yaml b/tests/fixtures/config/requirements/reject/git/common-config/zuul.yaml
similarity index 62%
rename from tests/fixtures/layout-requirement-reject.yaml
rename to tests/fixtures/config/requirements/reject/git/common-config/zuul.yaml
index 1f5d714..12a2538 100644
--- a/tests/fixtures/layout-requirement-reject.yaml
+++ b/tests/fixtures/config/requirements/reject/git/common-config/zuul.yaml
@@ -1,6 +1,8 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
     require:
       approval:
         - username: jenkins
@@ -18,8 +20,11 @@
       gerrit:
         verified: -1
 
-  - name: trigger
-    manager: IndependentPipelineManager
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: comment-added
@@ -35,10 +40,20 @@
       gerrit:
         verified: -1
 
-projects:
-  - name: org/project1
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
     pipeline:
-      - project1-pipeline
-  - name: org/project2
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
     trigger:
-      - project2-trigger
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/reject/git/org_project1/README b/tests/fixtures/config/requirements/reject/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/reject/git/org_project2/README b/tests/fixtures/config/requirements/reject/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/reject/main.yaml b/tests/fixtures/config/requirements/reject/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/reject/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/state/git/common-config/playbooks/project-job.yaml b/tests/fixtures/config/requirements/state/git/common-config/playbooks/project-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/git/common-config/playbooks/project-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/state/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/state/git/common-config/zuul.yaml
new file mode 100644
index 0000000..9491bff
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/git/common-config/zuul.yaml
@@ -0,0 +1,74 @@
+- pipeline:
+    name: current-check
+    manager: independent
+    source:
+      gerrit
+    require:
+      current-patchset: True
+    trigger:
+      gerrit:
+        - event: patchset-created
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: open-check
+    manager: independent
+    source:
+      gerrit
+    require:
+      open: True
+    trigger:
+      gerrit:
+        - event: patchset-created
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: status-check
+    manager: independent
+    source:
+      gerrit
+    require:
+      status: NEW
+    trigger:
+      gerrit:
+        - event: patchset-created
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project-job
+
+- project:
+    name: current-project
+    current-check:
+      jobs:
+        - project-job
+
+- project:
+    name: open-project
+    open-check:
+      jobs:
+        - project-job
+
+- project:
+    name: status-project
+    status-check:
+      jobs:
+        - project-job
diff --git a/tests/fixtures/config/requirements/state/git/current-project/README b/tests/fixtures/config/requirements/state/git/current-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/git/current-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/state/git/open-project/README b/tests/fixtures/config/requirements/state/git/open-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/git/open-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/state/git/status-project/README b/tests/fixtures/config/requirements/state/git/status-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/git/status-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/state/main.yaml b/tests/fixtures/config/requirements/state/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/state/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/username/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/username/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/username/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/username/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/username/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/username/git/common-config/zuul.yaml
new file mode 100644
index 0000000..ca2ff97
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/git/common-config/zuul.yaml
@@ -0,0 +1,52 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    require:
+      approval:
+        - username: ^(jenkins|zuul)$
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - username: jenkins
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/username/git/org_project1/README b/tests/fixtures/config/requirements/username/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/username/git/org_project2/README b/tests/fixtures/config/requirements/username/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/username/main.yaml b/tests/fixtures/config/requirements/username/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/username/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/vote1/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/vote1/git/common-config/zuul.yaml
new file mode 100644
index 0000000..00afe79
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/git/common-config/zuul.yaml
@@ -0,0 +1,55 @@
+
+- pipeline:
+    name: pipeline
+    manager: independent
+    require:
+      approval:
+        - username: jenkins
+          verified: 1
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - username: jenkins
+              verified: 1
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/vote1/git/org_project1/README b/tests/fixtures/config/requirements/vote1/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/vote1/git/org_project2/README b/tests/fixtures/config/requirements/vote1/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/vote1/main.yaml b/tests/fixtures/config/requirements/vote1/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote1/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project1-job.yaml b/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project1-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project1-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project2-job.yaml b/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project2-job.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/git/common-config/playbooks/project2-job.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/requirements/vote2/git/common-config/zuul.yaml b/tests/fixtures/config/requirements/vote2/git/common-config/zuul.yaml
new file mode 100644
index 0000000..73db7a7
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/git/common-config/zuul.yaml
@@ -0,0 +1,54 @@
+- pipeline:
+    name: pipeline
+    manager: independent
+    require:
+      approval:
+        - username: jenkins
+          verified: [1, 2]
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: trigger
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          require-approval:
+            - username: jenkins
+              verified: [1, 2]
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project1-job
+
+- job:
+    name: project2-job
+
+- project:
+    name: org/project1
+    pipeline:
+      jobs:
+        - project1-job
+
+- project:
+    name: org/project2
+    trigger:
+      jobs:
+        - project2-job
diff --git a/tests/fixtures/config/requirements/vote2/git/org_project1/README b/tests/fixtures/config/requirements/vote2/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/vote2/git/org_project2/README b/tests/fixtures/config/requirements/vote2/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/requirements/vote2/main.yaml b/tests/fixtures/config/requirements/vote2/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/requirements/vote2/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/experimental-project-test.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/experimental-project-test.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/experimental-project-test.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-merge.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test1.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test2.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/nonvoting-project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-merge.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-post.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-post.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-post.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-testfile.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-testfile.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project-testfile.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/playbooks/project1-project2-integration.yaml b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project1-project2-integration.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/playbooks/project1-project2-integration.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/common-config/zuul.yaml b/tests/fixtures/config/single-tenant/git/common-config/zuul.yaml
new file mode 100644
index 0000000..b91bf6f
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/common-config/zuul.yaml
@@ -0,0 +1,217 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- pipeline:
+    name: post
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+
+- pipeline:
+    name: experimental
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit: {}
+    failure:
+      gerrit: {}
+
+- job:
+    name: project-merge
+    hold-following-changes: true
+
+- job:
+    name: project-test1
+    attempts: 4
+    nodes:
+      - name: controller
+        image: image1
+
+- job:
+    name: project-test1
+    branches: stable
+    nodes:
+      - name: controller
+        image: image2
+
+- job:
+    name: project-post
+    nodes:
+      - name: static
+        image: ubuntu-xenial
+
+- job:
+    name: project-test2
+
+- job:
+    name: project1-project2-integration
+    queue-name: integration
+
+- job:
+    name: experimental-project-test
+
+- job:
+    name: nonvoting-project-merge
+    hold-following-changes: true
+
+- job:
+    name: nonvoting-project-test1
+
+- job:
+    name: nonvoting-project-test2
+    voting: false
+
+- job:
+    name: project-testfile
+    files:
+      - '.*-requires'
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+    gate:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project-testfile
+    post:
+      jobs:
+        - project-post
+
+- project:
+    name: org/project1
+    check:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project1-project2-integration
+    gate:
+      queue: integrated
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project1-project2-integration
+
+- project:
+    name: org/project2
+    gate:
+      queue: integrated
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project1-project2-integration
+
+- project:
+    name: org/project3
+    check:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project1-project2-integration
+    gate:
+      queue: integrated
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+              - project1-project2-integration
+    post:
+      jobs:
+        - project-post
+
+- project:
+    name: org/experimental-project
+    experimental:
+      jobs:
+        - project-merge:
+            jobs:
+              - experimental-project-test
+
+- project:
+    name: org/noop-project
+    check:
+      jobs:
+        - noop
+    gate:
+      jobs:
+        - noop
+
+- project:
+    name: org/nonvoting-project
+    check:
+      jobs:
+        - nonvoting-project-merge:
+            jobs:
+              - nonvoting-project-test1
+              - nonvoting-project-test2
+    gate:
+      jobs:
+        - nonvoting-project-merge:
+            jobs:
+              - nonvoting-project-test1
+              - nonvoting-project-test2
+
+- project:
+    name: org/no-jobs-project
+    check:
+      jobs:
+        - project-testfile
diff --git a/tests/fixtures/config/single-tenant/git/layout-disabled-at/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-disabled-at/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-disabled-at/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-disabled-at/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-disabled-at/zuul.yaml
new file mode 100644
index 0000000..4cf6f16
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-disabled-at/zuul.yaml
@@ -0,0 +1,30 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    disabled:
+      smtp:
+        to: you@example.com
+    disable-after-consecutive-failures: 3
+
+- job:
+    name: project-test1
+    nodes:
+      - name: controller
+        image: image1
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/playbooks/project-post.yaml b/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/playbooks/project-post.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/playbooks/project-post.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/zuul.yaml
new file mode 100644
index 0000000..30e574a
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-dont-ignore-ref-deletes/zuul.yaml
@@ -0,0 +1,23 @@
+- pipeline:
+    name: post
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+          ignore-deletes: False
+
+- job:
+    name: project-post
+    nodes:
+      - name: static
+        image: ubuntu-xenial
+
+- project:
+    name: org/project
+    post:
+      jobs:
+        - project-post
+
diff --git a/tests/fixtures/config/single-tenant/git/layout-footer-message/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-footer-message/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-footer-message/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-footer-message/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-footer-message/zuul.yaml
new file mode 100644
index 0000000..0c04070
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-footer-message/zuul.yaml
@@ -0,0 +1,38 @@
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+    footer-message: For CI problems and help debugging, contact ci@example.org
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      smtp:
+        to: you@example.com
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+      smtp:
+        to: you@example.com
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- job:
+    name: project-test1
+#    success-url: http://logs.exxxample.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}
+- project:
+    name: org/project
+    gate:
+      jobs:
+        - project-test1
+
diff --git a/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-old.yaml b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-old.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-old.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-older.yaml b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-older.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-bitrot-stable-older.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-idle/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-idle/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-idle/zuul.yaml
new file mode 100644
index 0000000..f71f3e4
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-idle/zuul.yaml
@@ -0,0 +1,27 @@
+- pipeline:
+    name: periodic
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      timer:
+        - time: '* * * * * */1'
+
+- job:
+    name: project-bitrot-stable-old
+    nodes:
+      - name: static
+        image: ubuntu-xenial
+
+- job:
+    name: project-bitrot-stable-older
+    nodes:
+      - name: static
+        image: ubuntu-trusty
+
+- project:
+    name: org/project
+    periodic:
+      jobs:
+        - project-bitrot-stable-old
+        - project-bitrot-stable-older
diff --git a/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-empty.yaml b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-empty.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-empty.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-full.yaml b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-full.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-irrelevant-starts-full.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-empty.yaml b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-empty.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-empty.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-full.yaml b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-full.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-inheritance/playbooks/project-test-nomatch-starts-full.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-inheritance/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-inheritance/zuul.yaml
new file mode 100644
index 0000000..3070af0
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-inheritance/zuul.yaml
@@ -0,0 +1,46 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+
+- job:
+    name: project-test-irrelevant-starts-empty
+
+- job:
+    name: project-test-irrelevant-starts-full
+    irrelevant-files:
+      - ^README$
+      - ^ignoreme$
+
+- job:
+    name: project-test-nomatch-starts-empty
+
+- job:
+    name: project-test-nomatch-starts-full
+    irrelevant-files:
+      - ^README$
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test-irrelevant-starts-empty:
+            irrelevant-files:
+              - ^README$
+              - ^ignoreme$
+        - project-test-irrelevant-starts-full
+        - project-test-nomatch-starts-empty:
+            irrelevant-files:
+              - ^README$
+        - project-test-nomatch-starts-full
diff --git a/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/playbooks/project-test-irrelevant-files.yaml b/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/playbooks/project-test-irrelevant-files.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/playbooks/project-test-irrelevant-files.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/zuul.yaml
new file mode 100644
index 0000000..f243bcc
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-irrelevant-files/zuul.yaml
@@ -0,0 +1,27 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+
+- job:
+    name: project-test-irrelevant-files
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test-irrelevant-files:
+            irrelevant-files:
+              - ^README$
+              - ^ignoreme$
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/zuul.yaml
new file mode 100644
index 0000000..12f1747
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex-reconfiguration/zuul.yaml
@@ -0,0 +1,23 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-one.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-one.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-one.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-two.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-two.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/mutex-two.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-mutex/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-mutex/zuul.yaml
new file mode 100644
index 0000000..e91903a
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-mutex/zuul.yaml
@@ -0,0 +1,33 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- job:
+    name: mutex-one
+    mutex: test-mutex
+
+- job:
+    name: mutex-two
+    mutex: test-mutex
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
+        - mutex-one
+        - mutex-two
diff --git a/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-old.yaml b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-old.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-old.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-older.yaml b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-older.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-bitrot-stable-older.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-no-timer/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-no-timer/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-no-timer/zuul.yaml
new file mode 100644
index 0000000..f754e37
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-no-timer/zuul.yaml
@@ -0,0 +1,50 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: periodic
+    manager: independent
+    # Trigger is required, set it to one that is a noop
+    # during tests that check the timer trigger.
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: ref-updated
+
+- job:
+    name: project-test1
+
+- job:
+    name: project-bitrot-stable-old
+    nodes:
+      - name: static
+        image: ubuntu-xenial
+
+- job:
+    name: project-bitrot-stable-older
+    nodes:
+      - name: static
+        image: ubuntu-trusty
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
+    periodic:
+      jobs:
+        - project-bitrot-stable-old
+        - project-bitrot-stable-older
diff --git a/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-merge.yaml b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test2.yaml b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-repo-deleted/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/zuul.yaml
new file mode 100644
index 0000000..2bffc3e
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-repo-deleted/zuul.yaml
@@ -0,0 +1,72 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- job:
+    name: project-merge
+    hold-following-changes: true
+
+- job:
+    name: project-test1
+    nodes:
+      - name: controller
+        image: image1
+
+- job:
+    name: project-test1
+    branches: stable
+    nodes:
+      - name: controller
+        image: image2
+
+- job:
+    name: project-test2
+
+- project:
+    name: org/delete-project
+    check:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+    gate:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
diff --git a/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/experimental-project-test.yaml b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/experimental-project-test.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/experimental-project-test.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-merge.yaml b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-merge.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test2.yaml b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-smtp/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-smtp/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-smtp/zuul.yaml
new file mode 100644
index 0000000..9effb1f
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-smtp/zuul.yaml
@@ -0,0 +1,81 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    start:
+      smtp:
+        to: you@example.com
+    success:
+      gerrit:
+        verified: 1
+      smtp:
+        to: alternative_me@example.com
+        from: zuul_from@example.com
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- job:
+    name: project-merge
+    hold-following-changes: true
+
+- job:
+    name: project-test1
+    nodes:
+      - name: controller
+        image: image1
+
+- job:
+    name: project-test1
+    branches: stable
+    nodes:
+      - name: controller
+        image: image2
+
+- job:
+    name: project-test2
+
+- job:
+    name: experimental-project-test
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
+    gate:
+      jobs:
+        - project-merge:
+            jobs:
+              - project-test1
+              - project-test2
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-old.yaml b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-old.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-old.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-older.yaml b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-older.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/playbooks/project-bitrot-stable-older.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer-smtp/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/zuul.yaml
new file mode 100644
index 0000000..4a14107
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer-smtp/zuul.yaml
@@ -0,0 +1,28 @@
+- pipeline:
+    name: periodic
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      timer:
+        - time: '* * * * * */1'
+    success:
+      smtp:
+        to: alternative_me@example.com
+        from: zuul_from@example.com
+        subject: 'Periodic check for {change.project} succeeded'
+
+- job:
+    name: project-bitrot-stable-old
+    success-url: http://logs.example.com/{job.name}/{build.number}
+
+- job:
+    name: project-bitrot-stable-older
+    success-url: http://logs.example.com/{job.name}/{build.number}
+
+- project:
+    name: org/project
+    periodic:
+      jobs:
+        - project-bitrot-stable-old
+        - project-bitrot-stable-older
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-old.yaml b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-old.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-old.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-older.yaml b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-older.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-bitrot-stable-older.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test1.yaml b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test2.yaml b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/single-tenant/git/layout-timer/zuul.yaml b/tests/fixtures/config/single-tenant/git/layout-timer/zuul.yaml
new file mode 100644
index 0000000..f69a91d
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/layout-timer/zuul.yaml
@@ -0,0 +1,52 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: periodic
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      timer:
+        - time: '* * * * * */1'
+
+- job:
+    name: project-test1
+
+- job:
+    name: project-test2
+
+- job:
+    name: project-bitrot-stable-old
+    nodes:
+      - name: static
+        image: ubuntu-xenial
+
+- job:
+    name: project-bitrot-stable-older
+    nodes:
+      - name: static
+        image: ubuntu-trusty
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
+        - project-test2
+    periodic:
+      jobs:
+        - project-bitrot-stable-old
+        - project-bitrot-stable-older
diff --git a/tests/fixtures/config/single-tenant/git/org_delete-project/README b/tests/fixtures/config/single-tenant/git/org_delete-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_delete-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_experimental-project/README b/tests/fixtures/config/single-tenant/git/org_experimental-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_experimental-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_no-jobs-project/README b/tests/fixtures/config/single-tenant/git/org_no-jobs-project/README
new file mode 100644
index 0000000..44f3bac
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_no-jobs-project/README
@@ -0,0 +1 @@
+staypuft
diff --git a/tests/fixtures/config/single-tenant/git/org_nonvoting-project/README b/tests/fixtures/config/single-tenant/git/org_nonvoting-project/README
new file mode 100644
index 0000000..2cc3865
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_nonvoting-project/README
@@ -0,0 +1 @@
+dont tread on me
diff --git a/tests/fixtures/config/single-tenant/git/org_noop-project/README b/tests/fixtures/config/single-tenant/git/org_noop-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_noop-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_project/README b/tests/fixtures/config/single-tenant/git/org_project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_project1/README b/tests/fixtures/config/single-tenant/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_project2/README b/tests/fixtures/config/single-tenant/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/git/org_project3/README b/tests/fixtures/config/single-tenant/git/org_project3/README
new file mode 100644
index 0000000..234496b
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_project3/README
@@ -0,0 +1 @@
+third
diff --git a/tests/fixtures/config/single-tenant/git/org_unknown/README b/tests/fixtures/config/single-tenant/git/org_unknown/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/git/org_unknown/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/single-tenant/main.yaml b/tests/fixtures/config/single-tenant/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/single-tenant/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test.yaml b/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test2.yaml b/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/success-url/git/common-config/playbooks/docs-draft-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/success-url/git/common-config/zuul.yaml b/tests/fixtures/config/success-url/git/common-config/zuul.yaml
new file mode 100644
index 0000000..7edb340
--- /dev/null
+++ b/tests/fixtures/config/success-url/git/common-config/zuul.yaml
@@ -0,0 +1,35 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    start:
+      smtp:
+        to: alternative_me@example.com
+    success:
+      gerrit:
+        verified: 1
+      smtp:
+        to: alternative_me@example.com
+    failure:
+      gerrit:
+        verified: -1
+
+
+- job:
+    name: docs-draft-test
+    success-url: http://docs-draft.example.org/{build.parameters[LOG_PATH]}/publish-docs/
+
+- job:
+    name: docs-draft-test2
+    success-url: http://docs-draft.example.org/{NOPE}/{build.parameters[BAD]}/publish-docs/
+
+- project:
+    name: org/docs
+    check:
+      jobs:
+        - docs-draft-test
+        - docs-draft-test2
diff --git a/tests/fixtures/config/success-url/git/org_docs/README b/tests/fixtures/config/success-url/git/org_docs/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/success-url/git/org_docs/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/success-url/main.yaml b/tests/fixtures/config/success-url/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/success-url/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-foo-test5.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-foo-test5.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-foo-test5.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test3.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test3.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test3.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test4.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test4.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/layered-project-test4.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test6.yaml b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test6.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/playbooks/project-test6.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/templated-project/git/common-config/zuul.yaml b/tests/fixtures/config/templated-project/git/common-config/zuul.yaml
new file mode 100644
index 0000000..22a2d6d
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/common-config/zuul.yaml
@@ -0,0 +1,100 @@
+- pipeline:
+    name: check
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+- pipeline:
+    name: gate
+    manager: dependent
+    success-message: Build succeeded (gate).
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+- pipeline:
+    name: post
+    manager: independent
+    source:
+      gerrit
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+
+- project-template:
+    name: test-one-and-two
+    check:
+      jobs:
+        - project-test1
+        - project-test2
+
+- project-template:
+    name: test-three-and-four
+    check:
+       jobs:
+         - layered-project-test3
+         - layered-project-test4
+
+- project-template:
+    name: test-five
+    check:
+      jobs:
+         - layered-project-foo-test5
+
+- job:
+    name: project-test1
+
+- job:
+    name: project-test2
+
+- job:
+    name: layered-project-test3
+
+- job:
+    name: layered-project-test4
+
+- job:
+    name: layered-project-foo-test5
+
+- job:
+    name: project-test6
+
+- project:
+    name: org/templated-project
+    templates:
+      - test-one-and-two
+
+- project:
+    name: org/layered-project
+    templates:
+      - test-one-and-two
+      - test-three-and-four
+      - test-five
+    check:
+      jobs:
+        - project-test6
diff --git a/tests/fixtures/config/templated-project/git/org_layered-project/README b/tests/fixtures/config/templated-project/git/org_layered-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/org_layered-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/templated-project/git/org_templated-project/README b/tests/fixtures/config/templated-project/git/org_templated-project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/templated-project/git/org_templated-project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/templated-project/main.yaml b/tests/fixtures/config/templated-project/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/templated-project/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/zuul.yaml b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/zuul.yaml
new file mode 100644
index 0000000..302dfcf
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/common-config/zuul.yaml
@@ -0,0 +1,42 @@
+- pipeline:
+    name: review_check
+    manager: independent
+    source: review_gerrit
+    trigger:
+      review_gerrit:
+        - event: patchset-created
+    success:
+      review_gerrit:
+        verified: 1
+    failure:
+      review_gerrit:
+        verified: -1
+
+- pipeline:
+    name: another_check
+    manager: independent
+    source: another_gerrit
+    trigger:
+      another_gerrit:
+        - event: patchset-created
+    success:
+      another_gerrit:
+        verified: 1
+    failure:
+      another_gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- job:
+    name: project-test2
+
+- project:
+    name: org/project1
+    review_check:
+      jobs:
+        - project-test1
+    another_check:
+      jobs:
+        - project-test2
diff --git a/tests/fixtures/config/zuul-connections-multiple-gerrits/git/org_project1/README b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-multiple-gerrits/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/zuul-connections-multiple-gerrits/main.yaml b/tests/fixtures/config/zuul-connections-multiple-gerrits/main.yaml
new file mode 100644
index 0000000..730cc7e
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-multiple-gerrits/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      review_gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test1.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test2.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/playbooks/project-test2.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/zuul.yaml b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/zuul.yaml
new file mode 100644
index 0000000..114a4a3
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-same-gerrit/git/common-config/zuul.yaml
@@ -0,0 +1,26 @@
+- pipeline:
+    name: check
+    manager: independent
+    source: review_gerrit
+    trigger:
+      review_gerrit:
+        - event: patchset-created
+    success:
+      review_gerrit:
+        verified: 1
+    failure:
+      alt_voting_gerrit:
+        verified: -1
+
+- job:
+    name: project-test1
+
+- job:
+    name: project-test2
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
+        - project-test2
diff --git a/tests/fixtures/config/zuul-connections-same-gerrit/git/org_project/README b/tests/fixtures/config/zuul-connections-same-gerrit/git/org_project/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-same-gerrit/git/org_project/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/zuul-connections-same-gerrit/main.yaml b/tests/fixtures/config/zuul-connections-same-gerrit/main.yaml
new file mode 100644
index 0000000..90297fb
--- /dev/null
+++ b/tests/fixtures/config/zuul-connections-same-gerrit/main.yaml
@@ -0,0 +1,8 @@
+- tenant:
+    name: tenant-one
+    source:
+      review_gerrit:
+        config-repos:
+          - common-config
+        project-repos:
+          - org/project
diff --git a/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-check.yaml b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-check.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-check.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-gate.yaml b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-gate.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/playbooks/project-gate.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/layout-zuultrigger-enqueued.yaml b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/zuul.yaml
similarity index 70%
rename from tests/fixtures/layout-zuultrigger-enqueued.yaml
rename to tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/zuul.yaml
index 8babd9e..8d63576 100644
--- a/tests/fixtures/layout-zuultrigger-enqueued.yaml
+++ b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/common-config/zuul.yaml
@@ -1,6 +1,6 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
+- pipeline:
+    name: check
+    manager: independent
     source: gerrit
     require:
       approval:
@@ -18,9 +18,9 @@
       gerrit:
         verified: -1
 
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+- pipeline:
+    name: gate
+    manager: dependent
     source: gerrit
     require:
       approval:
@@ -45,9 +45,17 @@
         verified: 0
     precedence: high
 
-projects:
-  - name: org/project
+- job:
+    name: project-check
+
+- job:
+    name: project-gate
+
+- project:
+    name: org/project
     check:
-      - project-check
+      jobs:
+        - project-check
     gate:
-      - project-gate
+      jobs:
+        - project-gate
diff --git a/tests/cmd/__init__.py b/tests/fixtures/config/zuultrigger/parent-change-enqueued/git/org_project/README
similarity index 100%
copy from tests/cmd/__init__.py
copy to tests/fixtures/config/zuultrigger/parent-change-enqueued/git/org_project/README
diff --git a/tests/fixtures/config/zuultrigger/parent-change-enqueued/main.yaml b/tests/fixtures/config/zuultrigger/parent-change-enqueued/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/parent-change-enqueued/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-check.yaml b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-check.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-check.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-gate.yaml b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-gate.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/playbooks/project-gate.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/layout-zuultrigger-merged.yaml b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/zuul.yaml
similarity index 68%
rename from tests/fixtures/layout-zuultrigger-merged.yaml
rename to tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/zuul.yaml
index bb06dde..eb6bf1c 100644
--- a/tests/fixtures/layout-zuultrigger-merged.yaml
+++ b/tests/fixtures/config/zuultrigger/project-change-merged/git/common-config/zuul.yaml
@@ -1,6 +1,6 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
+- pipeline:
+    name: check
+    manager: independent
     source: gerrit
     trigger:
       gerrit:
@@ -12,8 +12,9 @@
       gerrit:
         verified: -1
 
-  - name: gate
-    manager: DependentPipelineManager
+- pipeline:
+    name: gate
+    manager: dependent
     failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
     source: gerrit
     trigger:
@@ -33,8 +34,9 @@
         verified: 0
     precedence: high
 
-  - name: merge-check
-    manager: IndependentPipelineManager
+- pipeline:
+    name: merge-check
+    manager: independent
     source: gerrit
     ignore-dependencies: true
     trigger:
@@ -44,11 +46,20 @@
       gerrit:
         verified: -1
 
-projects:
-  - name: org/project
+- job:
+    name: project-check
+
+- job:
+    name: project-gate
+
+- project:
+    name: org/project
     check:
-      - project-check
+      jobs:
+        - project-check
     gate:
-      - project-gate
+      jobs:
+        - project-gate
     merge-check:
-      - noop
+      jobs:
+        - noop
diff --git a/tests/cmd/__init__.py b/tests/fixtures/config/zuultrigger/project-change-merged/git/org_project/README
similarity index 100%
copy from tests/cmd/__init__.py
copy to tests/fixtures/config/zuultrigger/project-change-merged/git/org_project/README
diff --git a/tests/fixtures/config/zuultrigger/project-change-merged/main.yaml b/tests/fixtures/config/zuultrigger/project-change-merged/main.yaml
new file mode 100644
index 0000000..a22ed5c
--- /dev/null
+++ b/tests/fixtures/config/zuultrigger/project-change-merged/main.yaml
@@ -0,0 +1,6 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-repos:
+          - common-config
diff --git a/tests/fixtures/custom_functions.py b/tests/fixtures/custom_functions.py
deleted file mode 100644
index 4712052..0000000
--- a/tests/fixtures/custom_functions.py
+++ /dev/null
@@ -1,2 +0,0 @@
-def select_debian_node(item, params):
-    params['ZUUL_NODE'] = 'debian'
diff --git a/tests/fixtures/custom_functions_live_reconfiguration_functions.py b/tests/fixtures/custom_functions_live_reconfiguration_functions.py
deleted file mode 100644
index d8e06f4..0000000
--- a/tests/fixtures/custom_functions_live_reconfiguration_functions.py
+++ /dev/null
@@ -1,2 +0,0 @@
-def select_debian_node(item, params):
-    params['ZUUL_NODE'] = 'wheezy'
diff --git a/tests/fixtures/layout-abort-attempts.yaml b/tests/fixtures/layout-abort-attempts.yaml
deleted file mode 100644
index 86d9d78..0000000
--- a/tests/fixtures/layout-abort-attempts.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-
-jobs:
-  - name: project-test1
-    attempts: 4
-
-projects:
-  - name: org/project
-    check:
-      - project-merge:
-        - project-test1
-        - project-test2
diff --git a/tests/fixtures/layout-bad-queue.yaml b/tests/fixtures/layout-bad-queue.yaml
deleted file mode 100644
index 3eb2051..0000000
--- a/tests/fixtures/layout-bad-queue.yaml
+++ /dev/null
@@ -1,74 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    trigger:
-      gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      gerrit:
-        verified: 2
-        submit: true
-    failure:
-      gerrit:
-        verified: -2
-    start:
-      gerrit:
-        verified: 0
-    precedence: high
-
-jobs:
-  - name: project1-project2-integration
-    queue-name: integration
-  - name: project1-test1
-    queue-name: not_integration
-
-projects:
-  - name: org/project1
-    check:
-      - project1-merge:
-        - project1-test1
-        - project1-test2
-        - project1-project2-integration
-    gate:
-      - project1-merge:
-        - project1-test1
-        - project1-test2
-        - project1-project2-integration
-    post:
-      - project1-post
-
-  - name: org/project2
-    check:
-      - project2-merge:
-        - project2-test1
-        - project2-test2
-        - project1-project2-integration
-    gate:
-      - project2-merge:
-        - project2-test1
-        - project2-test2
-        - project1-project2-integration
-    post:
-      - project2-post
diff --git a/tests/fixtures/layout-connections-multiple-gerrits.yaml b/tests/fixtures/layout-connections-multiple-gerrits.yaml
deleted file mode 100644
index 029f42f..0000000
--- a/tests/fixtures/layout-connections-multiple-gerrits.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    source: review_gerrit
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        VRFY: 1
-    failure:
-      review_gerrit:
-        VRFY: -1
-
-  - name: another_check
-    manager: IndependentPipelineManager
-    source: another_gerrit
-    trigger:
-      another_gerrit:
-        - event: patchset-created
-    success:
-      another_gerrit:
-        VRFY: 1
-    failure:
-      another_gerrit:
-        VRFY: -1
-
-projects:
-  - name: org/project
-    check:
-      - project-review-gerrit
-    another_check:
-      - project-another-gerrit
-
-  - name: org/project1
-    another_check:
-      - project1-another-gerrit
diff --git a/tests/fixtures/layout-dont-ignore-deletes.yaml b/tests/fixtures/layout-dont-ignore-deletes.yaml
deleted file mode 100644
index 1cf3c71..0000000
--- a/tests/fixtures/layout-dont-ignore-deletes.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-includes:
-  - python-file: custom_functions.py
-
-pipelines:
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-          ignore-deletes: False
-
-projects:
-  - name: org/project
-    post:
-      - project-post
diff --git a/tests/fixtures/layout-footer-message.yaml b/tests/fixtures/layout-footer-message.yaml
deleted file mode 100644
index 7977c19..0000000
--- a/tests/fixtures/layout-footer-message.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-includes:
-  - python-file: custom_functions.py
-
-pipelines:
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    footer-message: For CI problems and help debugging, contact ci@example.org
-    trigger:
-      gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      gerrit:
-        verified: 2
-        submit: true
-      smtp:
-        to: success@example.org
-    failure:
-      gerrit:
-        verified: -2
-      smtp:
-        to: failure@example.org
-    start:
-      gerrit:
-        verified: 0
-    precedence: high
-
-projects:
-  - name: org/project
-    gate:
-      - test1
-      - test2
diff --git a/tests/fixtures/layout-idle.yaml b/tests/fixtures/layout-idle.yaml
deleted file mode 100644
index 0870788..0000000
--- a/tests/fixtures/layout-idle.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-pipelines:
-  - name: periodic
-    manager: IndependentPipelineManager
-    trigger:
-      timer:
-        - time: '* * * * * */1'
-
-projects:
-  - name: org/project
-    periodic:
-      - project-bitrot-stable-old
-      - project-bitrot-stable-older
diff --git a/tests/fixtures/layout-live-reconfiguration-functions.yaml b/tests/fixtures/layout-live-reconfiguration-functions.yaml
index e261a88..b22b3ab 100644
--- a/tests/fixtures/layout-live-reconfiguration-functions.yaml
+++ b/tests/fixtures/layout-live-reconfiguration-functions.yaml
@@ -26,12 +26,3 @@
   - name: ^.*-merge$
     failure-message: Unable to merge change
     hold-following-changes: true
-  - name: node-project-test1
-    parameter-function: select_debian_node
-
-projects:
-  - name: org/node-project
-    gate:
-      - node-project-merge:
-        - node-project-test1
-        - node-project-test2
diff --git a/tests/fixtures/layout-mutex.yaml b/tests/fixtures/layout-mutex.yaml
deleted file mode 100644
index fcd0529..0000000
--- a/tests/fixtures/layout-mutex.yaml
+++ /dev/null
@@ -1,25 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-jobs:
-  - name: mutex-one
-    mutex: test-mutex
-  - name: mutex-two
-    mutex: test-mutex
-
-projects:
-  - name: org/project
-    check:
-      - project-test1
-      - mutex-one
-      - mutex-two
diff --git a/tests/fixtures/layout-no-timer.yaml b/tests/fixtures/layout-no-timer.yaml
deleted file mode 100644
index ca40d13..0000000
--- a/tests/fixtures/layout-no-timer.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: periodic
-    manager: IndependentPipelineManager
-    # Trigger is required, set it to one that is a noop
-    # during tests that check the timer trigger.
-    trigger:
-      gerrit:
-        - event: ref-updated
-
-projects:
-  - name: org/project
-    check:
-      - project-test1
-    periodic:
-      - project-bitrot-stable-old
-      - project-bitrot-stable-older
diff --git a/tests/fixtures/layout-repo-deleted.yaml b/tests/fixtures/layout-repo-deleted.yaml
deleted file mode 100644
index 967009a..0000000
--- a/tests/fixtures/layout-repo-deleted.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    trigger:
-      gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      gerrit:
-        verified: 2
-        submit: true
-    failure:
-      gerrit:
-        verified: -2
-    start:
-      gerrit:
-        verified: 0
-    precedence: high
-
-projects:
-  - name: org/delete-project
-    check:
-      - project-merge:
-        - project-test1
-        - project-test2
-    gate:
-      - project-merge:
-        - project-test1
-        - project-test2
-    post:
-      - project-post
diff --git a/tests/fixtures/layout-requirement-current-patchset.yaml b/tests/fixtures/layout-requirement-current-patchset.yaml
deleted file mode 100644
index 405077e..0000000
--- a/tests/fixtures/layout-requirement-current-patchset.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    require:
-      current-patchset: True
-    trigger:
-      gerrit:
-        - event: patchset-created
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layout-requirement-email.yaml b/tests/fixtures/layout-requirement-email.yaml
deleted file mode 100644
index 4bfb733..0000000
--- a/tests/fixtures/layout-requirement-email.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - email: jenkins@example.com
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - email: jenkins@example.com
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-requirement-newer-than.yaml b/tests/fixtures/layout-requirement-newer-than.yaml
deleted file mode 100644
index b6beb35..0000000
--- a/tests/fixtures/layout-requirement-newer-than.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - username: jenkins
-          newer-than: 48h
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-              newer-than: 48h
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-requirement-older-than.yaml b/tests/fixtures/layout-requirement-older-than.yaml
deleted file mode 100644
index 2edf9df..0000000
--- a/tests/fixtures/layout-requirement-older-than.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - username: jenkins
-          older-than: 48h
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-              older-than: 48h
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-requirement-open.yaml b/tests/fixtures/layout-requirement-open.yaml
deleted file mode 100644
index e62719d..0000000
--- a/tests/fixtures/layout-requirement-open.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    require:
-      open: True
-    trigger:
-      gerrit:
-        - event: patchset-created
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layout-requirement-reject-username.yaml b/tests/fixtures/layout-requirement-reject-username.yaml
deleted file mode 100644
index 9c71045..0000000
--- a/tests/fixtures/layout-requirement-reject-username.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    reject:
-      approval:
-        - username: 'jenkins'
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          reject-approval:
-            - username: 'jenkins'
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
\ No newline at end of file
diff --git a/tests/fixtures/layout-requirement-status.yaml b/tests/fixtures/layout-requirement-status.yaml
deleted file mode 100644
index af33468..0000000
--- a/tests/fixtures/layout-requirement-status.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    require:
-      status: NEW
-    trigger:
-      gerrit:
-        - event: patchset-created
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layout-requirement-username.yaml b/tests/fixtures/layout-requirement-username.yaml
deleted file mode 100644
index f9e6477..0000000
--- a/tests/fixtures/layout-requirement-username.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - username: ^(jenkins|zuul)$
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-requirement-vote1.yaml b/tests/fixtures/layout-requirement-vote1.yaml
deleted file mode 100644
index 7ccadff..0000000
--- a/tests/fixtures/layout-requirement-vote1.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - username: jenkins
-          verified: 1
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-              verified: 1
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-requirement-vote2.yaml b/tests/fixtures/layout-requirement-vote2.yaml
deleted file mode 100644
index 33d84d1..0000000
--- a/tests/fixtures/layout-requirement-vote2.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-pipelines:
-  - name: pipeline
-    manager: IndependentPipelineManager
-    require:
-      approval:
-        - username: jenkins
-          verified: [1, 2]
-    trigger:
-      gerrit:
-        - event: comment-added
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: trigger
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-              verified: [1, 2]
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-projects:
-  - name: org/project1
-    pipeline:
-      - project1-pipeline
-  - name: org/project2
-    trigger:
-      - project2-trigger
diff --git a/tests/fixtures/layout-skip-if.yaml b/tests/fixtures/layout-skip-if.yaml
deleted file mode 100644
index 0cfb445..0000000
--- a/tests/fixtures/layout-skip-if.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-
-jobs:
-  # Defining a metajob will validate that the skip-if attribute of the
-  # metajob is correctly copied to the job.
-  - name: ^.*skip-if$
-    skip-if:
-      - project: ^org/project$
-        branch: ^master$
-        all-files-match-any:
-          - ^README$
-  - name: project-test-skip-if
-
-projects:
-  - name: org/project
-    check:
-      - project-test-skip-if
diff --git a/tests/fixtures/layout-timer-smtp.yaml b/tests/fixtures/layout-timer-smtp.yaml
deleted file mode 100644
index b5a6ce0..0000000
--- a/tests/fixtures/layout-timer-smtp.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
-pipelines:
-  - name: periodic
-    manager: IndependentPipelineManager
-    trigger:
-      timer:
-        - time: '* * * * * */1'
-    success:
-      smtp:
-        to: alternative_me@example.com
-        from: zuul_from@example.com
-        subject: 'Periodic check for {change.project} succeeded'
-
-jobs:
-  - name: project-bitrot-stable-old
-    success-pattern: http://logs.example.com/{job.name}/{build.number}
-  - name: project-bitrot-stable-older
-    success-pattern: http://logs.example.com/{job.name}/{build.number}
-
-projects:
-  - name: org/project
-    periodic:
-      - project-bitrot-stable-old
-      - project-bitrot-stable-older
diff --git a/tests/fixtures/layout-timer.yaml b/tests/fixtures/layout-timer.yaml
deleted file mode 100644
index 4904f87..0000000
--- a/tests/fixtures/layout-timer.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: periodic
-    manager: IndependentPipelineManager
-    trigger:
-      timer:
-        - time: '* * * * * */1'
-
-projects:
-  - name: org/project
-    check:
-      - project-merge:
-        - project-test1
-        - project-test2
-    periodic:
-      - project-bitrot-stable-old
-      - project-bitrot-stable-older
diff --git a/tests/fixtures/layout.yaml b/tests/fixtures/layout.yaml
index 2e48ff1..6131de0 100644
--- a/tests/fixtures/layout.yaml
+++ b/tests/fixtures/layout.yaml
@@ -3,7 +3,9 @@
 
 pipelines:
   - name: check
-    manager: IndependentPipelineManager
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: patchset-created
@@ -15,15 +17,19 @@
         verified: -1
 
   - name: post
-    manager: IndependentPipelineManager
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: ref-updated
           ref: ^(?!refs/).*$
 
   - name: gate
-    manager: DependentPipelineManager
+    manager: dependent
     failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: comment-added
@@ -42,8 +48,10 @@
     precedence: high
 
   - name: unused
-    manager: IndependentPipelineManager
+    manager: independent
     dequeue-on-new-patchset: false
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: comment-added
@@ -51,7 +59,9 @@
             - approved: 1
 
   - name: dup1
-    manager: IndependentPipelineManager
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: change-restored
@@ -63,7 +73,9 @@
         verified: -1
 
   - name: dup2
-    manager: IndependentPipelineManager
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: change-restored
@@ -75,8 +87,10 @@
         verified: -1
 
   - name: conflict
-    manager: DependentPipelineManager
+    manager: dependent
     failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: comment-added
@@ -94,7 +108,9 @@
         verified: 0
 
   - name: experimental
-    manager: IndependentPipelineManager
+    manager: independent
+    source:
+      gerrit
     trigger:
       gerrit:
         - event: patchset-created
@@ -113,8 +129,6 @@
   - name: project-testfile
     files:
       - '.*-requires'
-  - name: node-project-test1
-    parameter-function: select_debian_node
   - name: project1-project2-integration
     queue-name: integration
   - name: mutex-one
@@ -126,19 +140,6 @@
       - project1
       - extratag
 
-project-templates:
-  - name: test-one-and-two
-    check:
-     - '{projectname}-test1'
-     - '{projectname}-test2'
-  - name: test-three-and-four
-    check:
-     - '{name}-test3'
-     - '{name}-test4'
-  - name: test-five
-    check:
-     - '{name}-{something}-test5'
-
 projects:
   - name: org/project
     merge-mode: cherry-pick
@@ -201,14 +202,6 @@
     post:
       - project3-post
 
-  - name: org/one-job-project
-    check:
-      - one-job-project-merge
-    gate:
-      - one-job-project-merge
-    post:
-      - one-job-project-post
-
   - name: org/nonvoting-project
     check:
       - nonvoting-project-merge:
@@ -221,27 +214,6 @@
     post:
       - nonvoting-project-post
 
-  - name: org/templated-project
-    template:
-      - name: test-one-and-two
-        projectname: project
-
-  - name: org/layered-project
-    template:
-      - name: test-one-and-two
-        projectname: project
-      - name: test-three-and-four
-      - name: test-five
-        something: foo
-    check:
-      - project-test6
-
-  - name: org/node-project
-    gate:
-      - node-project-merge:
-        - node-project-test1
-        - node-project-test2
-
   - name: org/conflict-project
     conflict:
       - conflict-project-merge:
diff --git a/tests/fixtures/zuul-connections-multiple-gerrits.conf b/tests/fixtures/zuul-connections-multiple-gerrits.conf
index f067e6e..3e6850d 100644
--- a/tests/fixtures/zuul-connections-multiple-gerrits.conf
+++ b/tests/fixtures/zuul-connections-multiple-gerrits.conf
@@ -2,16 +2,19 @@
 server=127.0.0.1
 
 [zuul]
-layout_config=layout-connections-multiple-voters.yaml
+tenant_config=main.yaml
 url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
 job_name_in_report=true
 
 [merger]
-git_dir=/tmp/zuul-test/git
+git_dir=/tmp/zuul-test/merger-git
 git_user_email=zuul@example.com
 git_user_name=zuul
 zuul_url=http://zuul.example.com/p
 
+[launcher]
+git_dir=/tmp/zuul-test/launcher-git
+
 [swift]
 authurl=https://identity.api.example.org/v2.0/
 user=username
diff --git a/tests/fixtures/zuul-connections-same-gerrit.conf b/tests/fixtures/zuul-connections-same-gerrit.conf
index 2609d30..30564de 100644
--- a/tests/fixtures/zuul-connections-same-gerrit.conf
+++ b/tests/fixtures/zuul-connections-same-gerrit.conf
@@ -2,16 +2,19 @@
 server=127.0.0.1
 
 [zuul]
-layout_config=layout-connections-multiple-voters.yaml
+tenant_config=config/zuul-connections-same-gerrit/main.yaml
 url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
 job_name_in_report=true
 
 [merger]
-git_dir=/tmp/zuul-test/git
+git_dir=/tmp/zuul-test/merger-git
 git_user_email=zuul@example.com
 git_user_name=zuul
 zuul_url=http://zuul.example.com/p
 
+[launcher]
+git_dir=/tmp/zuul-test/launcher-git
+
 [swift]
 authurl=https://identity.api.example.org/v2.0/
 user=username
@@ -41,10 +44,11 @@
 default_from=zuul@example.com
 default_to=you@example.com
 
-[connection resultsdb]
-driver=sql
-dburi=$MYSQL_FIXTURE_DBURI$
+# TODOv3(jeblair): commented out until sqlalchemy conenction ported to
+# v3 driver syntax
+#[connection resultsdb] driver=sql
+#dburi=$MYSQL_FIXTURE_DBURI$
 
-[connection resultsdb_failures]
-driver=sql
-dburi=$MYSQL_FIXTURE_DBURI$
+#[connection resultsdb_failures]
+#driver=sql
+#dburi=$MYSQL_FIXTURE_DBURI$
diff --git a/tests/fixtures/zuul-git-driver.conf b/tests/fixtures/zuul-git-driver.conf
new file mode 100644
index 0000000..868e272
--- /dev/null
+++ b/tests/fixtures/zuul-git-driver.conf
@@ -0,0 +1,43 @@
+[gearman]
+server=127.0.0.1
+
+[zuul]
+tenant_config=config/zuul-connections-same-gerrit/main.yaml
+url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
+job_name_in_report=true
+
+[merger]
+git_dir=/tmp/zuul-test/git
+git_user_email=zuul@example.com
+git_user_name=zuul
+zuul_url=http://zuul.example.com/p
+
+[launcher]
+git_dir=/tmp/zuul-test/launcher-git
+
+[swift]
+authurl=https://identity.api.example.org/v2.0/
+user=username
+key=password
+tenant_name=" "
+
+default_container=logs
+region_name=EXP
+logserver_prefix=http://logs.example.org/server.app/
+
+[connection gerrit]
+driver=gerrit
+server=review.example.com
+user=jenkins
+sshkey=none
+
+[connection git]
+driver=git
+baseurl=""
+
+[connection outgoing_smtp]
+driver=smtp
+server=localhost
+port=25
+default_from=zuul@example.com
+default_to=you@example.com
diff --git a/tests/fixtures/zuul.conf b/tests/fixtures/zuul.conf
index 0956cc4..f0b6068 100644
--- a/tests/fixtures/zuul.conf
+++ b/tests/fixtures/zuul.conf
@@ -2,16 +2,19 @@
 server=127.0.0.1
 
 [zuul]
-layout_config=layout.yaml
+tenant_config=main.yaml
 url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
 job_name_in_report=true
 
 [merger]
-git_dir=/tmp/zuul-test/git
+git_dir=/tmp/zuul-test/merger-git
 git_user_email=zuul@example.com
 git_user_name=zuul
 zuul_url=http://zuul.example.com/p
 
+[launcher]
+git_dir=/tmp/zuul-test/launcher-git
+
 [swift]
 authurl=https://identity.api.example.org/v2.0/
 user=username
diff --git a/tests/make_playbooks.py b/tests/make_playbooks.py
new file mode 100755
index 0000000..12d9e71
--- /dev/null
+++ b/tests/make_playbooks.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+
+import yaml
+
+FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
+                           'fixtures')
+CONFIG_DIR = os.path.join(FIXTURE_DIR, 'config')
+
+
+def make_playbook(path):
+    d = os.path.dirname(path)
+    try:
+        os.makedirs(d)
+    except OSError:
+        pass
+    with open(path, 'w') as f:
+        f.write('- hosts: all\n')
+        f.write('  tasks: []\n')
+
+
+def handle_repo(path):
+    print('Repo: %s' % path)
+    config_path = None
+    for fn in ['zuul.yaml', '.zuul.yaml']:
+        if os.path.exists(os.path.join(path, fn)):
+            config_path = os.path.join(path, fn)
+            break
+    config = yaml.load(open(config_path))
+    for block in config:
+        if 'job' not in block:
+            continue
+        job = block['job']['name']
+        playbook = os.path.join(path, 'playbooks', job + '.yaml')
+        if not os.path.exists(playbook):
+            print('  Creating: %s' % job)
+            make_playbook(playbook)
+
+
+def main():
+    repo_dirs = []
+
+    for root, dirs, files in os.walk(CONFIG_DIR):
+        if 'zuul.yaml' in files or '.zuul.yaml' in files:
+            repo_dirs.append(root)
+
+    for path in repo_dirs:
+        handle_repo(path)
+
+
+if __name__ == '__main__':
+    main()
diff --git a/tests/cmd/__init__.py b/tests/nodepool/__init__.py
similarity index 100%
copy from tests/cmd/__init__.py
copy to tests/nodepool/__init__.py
diff --git a/tests/nodepool/test_nodepool_integration.py b/tests/nodepool/test_nodepool_integration.py
new file mode 100644
index 0000000..67968a3
--- /dev/null
+++ b/tests/nodepool/test_nodepool_integration.py
@@ -0,0 +1,126 @@
+# Copyright 2017 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import time
+from unittest import skip
+
+import zuul.zk
+import zuul.nodepool
+from zuul import model
+
+from tests.base import BaseTestCase
+
+
+class TestNodepoolIntegration(BaseTestCase):
+    # Tests the Nodepool interface class using a *real* nodepool and
+    # fake scheduler.
+
+    def setUp(self):
+        super(BaseTestCase, self).setUp()
+
+        self.zk_config = zuul.zk.ZooKeeperConnectionConfig('localhost')
+        self.zk = zuul.zk.ZooKeeper()
+        self.zk.connect([self.zk_config])
+
+        self.provisioned_requests = []
+        # This class implements the scheduler methods zuul.nodepool
+        # needs, so we pass 'self' as the scheduler.
+        self.nodepool = zuul.nodepool.Nodepool(self)
+
+    def waitForRequests(self):
+        # Wait until all requests are complete.
+        while self.nodepool.requests:
+            time.sleep(0.1)
+
+    def onNodesProvisioned(self, request):
+        # This is a scheduler method that the nodepool class calls
+        # back when a request is provisioned.
+        self.provisioned_requests.append(request)
+
+    def test_node_request(self):
+        # Test a simple node request
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'fake-label'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        request = self.nodepool.requestNodes(None, job)
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 1)
+        self.assertEqual(request.state, model.STATE_FULFILLED)
+
+        # Accept the nodes
+        self.nodepool.acceptNodes(request)
+        nodeset = request.nodeset
+
+        for node in nodeset.getNodes():
+            self.assertIsNotNone(node.lock)
+            self.assertEqual(node.state, model.STATE_READY)
+
+        # Mark the nodes in use
+        self.nodepool.useNodeSet(nodeset)
+        for node in nodeset.getNodes():
+            self.assertEqual(node.state, model.STATE_IN_USE)
+
+        # Return the nodes
+        self.nodepool.returnNodeSet(nodeset)
+        for node in nodeset.getNodes():
+            self.assertIsNone(node.lock)
+            self.assertEqual(node.state, model.STATE_USED)
+
+    def test_invalid_node_request(self):
+        # Test requests with an invalid node type fail
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'invalid-label'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        request = self.nodepool.requestNodes(None, job)
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 1)
+        self.assertEqual(request.state, model.STATE_FAILED)
+
+    @skip("Disabled until nodepool is ready")
+    def test_node_request_disconnect(self):
+        # Test that node requests are re-submitted after disconnect
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
+        nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        self.fake_nodepool.paused = True
+        request = self.nodepool.requestNodes(None, job)
+        self.zk.client.stop()
+        self.zk.client.start()
+        self.fake_nodepool.paused = False
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 1)
+        self.assertEqual(request.state, model.STATE_FULFILLED)
+
+    @skip("Disabled until nodepool is ready")
+    def test_node_request_canceled(self):
+        # Test that node requests can be canceled
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
+        nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        self.fake_nodepool.paused = True
+        request = self.nodepool.requestNodes(None, job)
+        self.nodepool.cancelRequest(request)
+
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 0)
diff --git a/tests/print_layout.py b/tests/print_layout.py
new file mode 100644
index 0000000..a295886
--- /dev/null
+++ b/tests/print_layout.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import argparse
+import os
+import sys
+
+FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
+                           'fixtures')
+CONFIG_DIR = os.path.join(FIXTURE_DIR, 'config')
+
+
+def print_file(title, path):
+    print('')
+    print(title)
+    print('-' * 78)
+    with open(path) as f:
+        print(f.read())
+    print('-' * 78)
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Print test layout.')
+    parser.add_argument(dest='config', nargs='?',
+                        help='the test configuration name')
+    args = parser.parse_args()
+    if not args.config:
+        print('Available test configurations:')
+        for d in os.listdir(CONFIG_DIR):
+            print('  ' + d)
+        sys.exit(1)
+    configdir = os.path.join(CONFIG_DIR, args.config)
+
+    title = '   Configuration: %s   ' % args.config
+    print('=' * len(title))
+    print(title)
+    print('=' * len(title))
+    print_file('Main Configuration',
+               os.path.join(configdir, 'main.yaml'))
+
+    gitroot = os.path.join(configdir, 'git')
+    for gitrepo in os.listdir(gitroot):
+        reporoot = os.path.join(gitroot, gitrepo)
+        print('')
+        print('=== Git repo: %s ===' % gitrepo)
+        filenames = os.listdir(reporoot)
+        for fn in filenames:
+            if fn in ['zuul.yaml', '.zuul.yaml']:
+                print_file('File: ' + os.path.join(gitrepo, fn),
+                           os.path.join(reporoot, fn))
+
+
+if __name__ == '__main__':
+    main()
diff --git a/tests/test_model.py b/tests/test_model.py
deleted file mode 100644
index 6ad0750..0000000
--- a/tests/test_model.py
+++ /dev/null
@@ -1,142 +0,0 @@
-# Copyright 2015 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import random
-
-import fixtures
-
-from zuul import change_matcher as cm
-from zuul import model
-
-from tests.base import BaseTestCase
-
-
-class TestJob(BaseTestCase):
-
-    @property
-    def job(self):
-        job = model.Job('job')
-        job.skip_if_matcher = cm.MatchAll([
-            cm.ProjectMatcher('^project$'),
-            cm.MatchAllFiles([cm.FileMatcher('^docs/.*$')]),
-        ])
-        return job
-
-    def test_change_matches_returns_false_for_matched_skip_if(self):
-        change = model.Change('project')
-        change.files = ['/COMMIT_MSG', 'docs/foo']
-        self.assertFalse(self.job.changeMatches(change))
-
-    def test_change_matches_returns_true_for_unmatched_skip_if(self):
-        change = model.Change('project')
-        change.files = ['/COMMIT_MSG', 'foo']
-        self.assertTrue(self.job.changeMatches(change))
-
-    def test_copy_retains_skip_if(self):
-        job = model.Job('job')
-        job.copy(self.job)
-        self.assertTrue(job.skip_if_matcher)
-
-    def _assert_job_booleans_are_not_none(self, job):
-        self.assertIsNotNone(job.voting)
-        self.assertIsNotNone(job.hold_following_changes)
-
-    def test_job_sets_defaults_for_boolean_attributes(self):
-        job = model.Job('job')
-        self._assert_job_booleans_are_not_none(job)
-
-    def test_metajob_does_not_set_defaults_for_boolean_attributes(self):
-        job = model.Job('^job')
-        self.assertIsNone(job.voting)
-        self.assertIsNone(job.hold_following_changes)
-
-    def test_metajob_copy_does_not_set_undefined_boolean_attributes(self):
-        job = model.Job('job')
-        metajob = model.Job('^job')
-        job.copy(metajob)
-        self._assert_job_booleans_are_not_none(job)
-
-
-class TestJobTimeData(BaseTestCase):
-    def setUp(self):
-        super(TestJobTimeData, self).setUp()
-        self.tmp_root = self.useFixture(fixtures.TempDir(
-            rootdir=os.environ.get("ZUUL_TEST_ROOT"))
-        ).path
-
-    def test_empty_timedata(self):
-        path = os.path.join(self.tmp_root, 'job-name')
-        self.assertFalse(os.path.exists(path))
-        self.assertFalse(os.path.exists(path + '.tmp'))
-        td = model.JobTimeData(path)
-        self.assertEqual(td.success_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-        self.assertEqual(td.failure_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-        self.assertEqual(td.results, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-
-    def test_save_reload(self):
-        path = os.path.join(self.tmp_root, 'job-name')
-        self.assertFalse(os.path.exists(path))
-        self.assertFalse(os.path.exists(path + '.tmp'))
-        td = model.JobTimeData(path)
-        self.assertEqual(td.success_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-        self.assertEqual(td.failure_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-        self.assertEqual(td.results, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
-        success_times = []
-        failure_times = []
-        results = []
-        for x in range(10):
-            success_times.append(int(random.random() * 1000))
-            failure_times.append(int(random.random() * 1000))
-            results.append(0)
-            results.append(1)
-        random.shuffle(results)
-        s = f = 0
-        for result in results:
-            if result:
-                td.add(failure_times[f], 'FAILURE')
-                f += 1
-            else:
-                td.add(success_times[s], 'SUCCESS')
-                s += 1
-        self.assertEqual(td.success_times, success_times)
-        self.assertEqual(td.failure_times, failure_times)
-        self.assertEqual(td.results, results[10:])
-        td.save()
-        self.assertTrue(os.path.exists(path))
-        self.assertFalse(os.path.exists(path + '.tmp'))
-        td = model.JobTimeData(path)
-        td.load()
-        self.assertEqual(td.success_times, success_times)
-        self.assertEqual(td.failure_times, failure_times)
-        self.assertEqual(td.results, results[10:])
-
-
-class TestTimeDataBase(BaseTestCase):
-    def setUp(self):
-        super(TestTimeDataBase, self).setUp()
-        self.tmp_root = self.useFixture(fixtures.TempDir(
-            rootdir=os.environ.get("ZUUL_TEST_ROOT"))
-        ).path
-        self.db = model.TimeDataBase(self.tmp_root)
-
-    def test_timedatabase(self):
-        self.assertEqual(self.db.getEstimatedTime('job-name'), 0)
-        self.db.update('job-name', 50, 'SUCCESS')
-        self.assertEqual(self.db.getEstimatedTime('job-name'), 50)
-        self.db.update('job-name', 100, 'SUCCESS')
-        self.assertEqual(self.db.getEstimatedTime('job-name'), 75)
-        for x in range(10):
-            self.db.update('job-name', 100, 'SUCCESS')
-        self.assertEqual(self.db.getEstimatedTime('job-name'), 100)
diff --git a/tests/test_reporter.py b/tests/test_reporter.py
deleted file mode 100644
index 6a179d2..0000000
--- a/tests/test_reporter.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright 2014 Rackspace Australia
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import fixtures
-import logging
-import testtools
-
-import zuul.reporter.gerrit
-import zuul.reporter.smtp
-import zuul.reporter.sql
-
-
-class TestSMTPReporter(testtools.TestCase):
-    log = logging.getLogger("zuul.test_reporter")
-
-    def test_reporter_abc(self):
-        # We only need to instantiate a class for this
-        reporter = zuul.reporter.smtp.SMTPReporter({})  # noqa
-
-    def test_reporter_name(self):
-        self.assertEqual('smtp', zuul.reporter.smtp.SMTPReporter.name)
-
-
-class TestGerritReporter(testtools.TestCase):
-    log = logging.getLogger("zuul.test_reporter")
-
-    def test_reporter_abc(self):
-        # We only need to instantiate a class for this
-        reporter = zuul.reporter.gerrit.GerritReporter(None)  # noqa
-
-    def test_reporter_name(self):
-        self.assertEqual('gerrit', zuul.reporter.gerrit.GerritReporter.name)
-
-
-class TestSQLReporter(testtools.TestCase):
-    log = logging.getLogger("zuul.test_reporter")
-
-    def test_reporter_abc(self):
-        # We only need to instantiate a class for this
-        # First mock out _setup_tables
-        def _fake_setup_tables(self):
-            pass
-
-        self.useFixture(fixtures.MonkeyPatch(
-            'zuul.reporter.sql.SQLReporter._setup_tables',
-            _fake_setup_tables
-        ))
-
-        reporter = zuul.reporter.sql.SQLReporter()  # noqa
-
-    def test_reporter_name(self):
-        self.assertEqual(
-            'sql', zuul.reporter.sql.SQLReporter.name)
diff --git a/tests/test_source.py b/tests/test_source.py
deleted file mode 100644
index 8a3e7d5..0000000
--- a/tests/test_source.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright 2014 Rackspace Australia
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import logging
-import testtools
-
-import zuul.source
-
-
-class TestGerritSource(testtools.TestCase):
-    log = logging.getLogger("zuul.test_source")
-
-    def test_source_name(self):
-        self.assertEqual('gerrit', zuul.source.gerrit.GerritSource.name)
diff --git a/tests/test_trigger.py b/tests/test_trigger.py
deleted file mode 100644
index 7eb1b69..0000000
--- a/tests/test_trigger.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2014 Rackspace Australia
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import logging
-import testtools
-
-import zuul.trigger
-
-
-class TestGerritTrigger(testtools.TestCase):
-    log = logging.getLogger("zuul.test_trigger")
-
-    def test_trigger_abc(self):
-        # We only need to instantiate a class for this
-        zuul.trigger.gerrit.GerritTrigger({})
-
-    def test_trigger_name(self):
-        self.assertEqual('gerrit', zuul.trigger.gerrit.GerritTrigger.name)
-
-
-class TestTimerTrigger(testtools.TestCase):
-    log = logging.getLogger("zuul.test_trigger")
-
-    def test_trigger_abc(self):
-        # We only need to instantiate a class for this
-        zuul.trigger.timer.TimerTrigger({})
-
-    def test_trigger_name(self):
-        self.assertEqual('timer', zuul.trigger.timer.TimerTrigger.name)
-
-
-class TestZuulTrigger(testtools.TestCase):
-    log = logging.getLogger("zuul.test_trigger")
-
-    def test_trigger_abc(self):
-        # We only need to instantiate a class for this
-        zuul.trigger.zuultrigger.ZuulTrigger({})
-
-    def test_trigger_name(self):
-        self.assertEqual('zuul', zuul.trigger.zuultrigger.ZuulTrigger.name)
diff --git a/tests/cmd/__init__.py b/tests/unit/__init__.py
similarity index 100%
rename from tests/cmd/__init__.py
rename to tests/unit/__init__.py
diff --git a/tests/test_change_matcher.py b/tests/unit/test_change_matcher.py
similarity index 100%
rename from tests/test_change_matcher.py
rename to tests/unit/test_change_matcher.py
diff --git a/tests/test_clonemapper.py b/tests/unit/test_clonemapper.py
similarity index 94%
rename from tests/test_clonemapper.py
rename to tests/unit/test_clonemapper.py
index b7814f8..bd8c8b0 100644
--- a/tests/test_clonemapper.py
+++ b/tests/unit/test_clonemapper.py
@@ -13,14 +13,9 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import logging
 import testtools
 from zuul.lib.clonemapper import CloneMapper
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-17s '
-                    '%(levelname)-8s %(message)s')
-
 
 class TestCloneMapper(testtools.TestCase):
 
diff --git a/tests/test_cloner.py b/tests/unit/test_cloner.py
similarity index 97%
rename from tests/test_cloner.py
rename to tests/unit/test_cloner.py
index 896fcba..da0f774 100644
--- a/tests/test_cloner.py
+++ b/tests/unit/test_cloner.py
@@ -26,10 +26,6 @@
 
 from tests.base import ZuulTestCase
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
-
 
 class TestCloner(ZuulTestCase):
 
@@ -37,11 +33,13 @@
     workspace_root = None
 
     def setUp(self):
+        self.skip("Disabled for early v3 development")
+
         super(TestCloner, self).setUp()
         self.workspace_root = os.path.join(self.test_root, 'workspace')
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-cloner.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-cloner.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
@@ -94,7 +92,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
                 cache_dir=cache_root,
             )
             cloner.execute()
@@ -176,7 +174,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
             )
             cloner.execute()
             work = self.getWorkspaceRepos(projects)
@@ -247,7 +245,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
             )
             cloner.execute()
             work = self.getWorkspaceRepos(projects)
@@ -362,7 +360,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
                 branch='stable/havana',  # Old branch for upgrade
             )
             cloner.execute()
@@ -425,7 +423,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
                 branch='master',  # New branch for upgrade
             )
             cloner.execute()
@@ -512,7 +510,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
                 project_branches={'org/project4': 'master'},
             )
             cloner.execute()
@@ -533,8 +531,8 @@
     def test_periodic(self):
         self.worker.hold_jobs_in_build = True
         self.create_branch('org/project', 'stable/havana')
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-timer.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-timer.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
@@ -548,8 +546,8 @@
         self.worker.hold_jobs_in_build = False
         # Stop queuing timer triggered jobs so that the assertions
         # below don't race against more jobs being queued.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-no-timer.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-no-timer.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
         self.worker.release()
@@ -578,7 +576,7 @@
                 zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters.get('ZUUL_BRANCH', None),
                 zuul_ref=build.parameters.get('ZUUL_REF', None),
-                zuul_url=self.git_root,
+                zuul_url=self.src_root,
                 branch='stable/havana',
             )
             cloner.execute()
diff --git a/tests/cmd/test_cloner.py b/tests/unit/test_cloner_cmd.py
similarity index 91%
rename from tests/cmd/test_cloner.py
rename to tests/unit/test_cloner_cmd.py
index 9cbb5b8..2d8747f 100644
--- a/tests/cmd/test_cloner.py
+++ b/tests/unit/test_cloner_cmd.py
@@ -12,16 +12,11 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import logging
 import os
 
 import testtools
 import zuul.cmd.cloner
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
-
 
 class TestClonerCmdArguments(testtools.TestCase):
 
diff --git a/tests/test_connection.py b/tests/unit/test_connection.py
similarity index 82%
rename from tests/test_connection.py
rename to tests/unit/test_connection.py
index f9f54f3..8954832 100644
--- a/tests/test_connection.py
+++ b/tests/unit/test_connection.py
@@ -12,13 +12,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import logging
-import testtools
-
 import sqlalchemy as sa
-
-import zuul.connection.gerrit
-import zuul.connection.sql
+from unittest import skip
 
 from tests.base import ZuulTestCase, ZuulDBTestCase
 
@@ -32,47 +27,32 @@
             return r
 
 
-class TestGerritConnection(testtools.TestCase):
-    log = logging.getLogger("zuul.test_connection")
+class TestConnections(ZuulTestCase):
+    config_file = 'zuul-connections-same-gerrit.conf'
+    tenant_config_file = 'config/zuul-connections-same-gerrit/main.yaml'
 
-    def test_driver_name(self):
-        self.assertEqual('gerrit',
-                         zuul.connection.gerrit.GerritConnection.driver_name)
-
-
-class TestSQLConnection(testtools.TestCase):
-    log = logging.getLogger("zuul.test_connection")
-
-    def test_driver_name(self):
-        self.assertEqual(
-            'sql',
-            zuul.connection.sql.SQLConnection.driver_name
-        )
-
-
-class TestConnections(ZuulDBTestCase):
     def test_multiple_gerrit_connections(self):
         "Test multiple connections to the one gerrit"
 
         A = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'A')
-        self.fake_review_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.addEvent('review_gerrit', A.getPatchsetCreatedEvent(1))
 
         self.waitUntilSettled()
 
         self.assertEqual(len(A.patchsets[-1]['approvals']), 1)
-        self.assertEqual(A.patchsets[-1]['approvals'][0]['type'], 'VRFY')
+        self.assertEqual(A.patchsets[-1]['approvals'][0]['type'], 'verified')
         self.assertEqual(A.patchsets[-1]['approvals'][0]['value'], '1')
         self.assertEqual(A.patchsets[-1]['approvals'][0]['by']['username'],
                          'jenkins')
 
         B = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'B')
-        self.worker.addFailTest('project-test2', B)
-        self.fake_review_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.launch_server.failJob('project-test2', B)
+        self.addEvent('review_gerrit', B.getPatchsetCreatedEvent(1))
 
         self.waitUntilSettled()
 
         self.assertEqual(len(B.patchsets[-1]['approvals']), 1)
-        self.assertEqual(B.patchsets[-1]['approvals'][0]['type'], 'VRFY')
+        self.assertEqual(B.patchsets[-1]['approvals'][0]['type'], 'verified')
         self.assertEqual(B.patchsets[-1]['approvals'][0]['value'], '-1')
         self.assertEqual(B.patchsets[-1]['approvals'][0]['by']['username'],
                          'civoter')
@@ -88,6 +68,7 @@
         self.assertEqual(9, len(insp.get_columns(buildset_table)))
         self.assertEqual(10, len(insp.get_columns(build_table)))
 
+    @skip("Disabled for early v3 development")
     def test_sql_tables_created(self):
         "Test the default table is created"
         self.config.set('zuul', 'layout_config',
@@ -166,6 +147,7 @@
         self.assertEqual('http://logs.example.com/2/1/check/project-test1/4',
                          buildset1_builds[-2]['log_url'])
 
+    @skip("Disabled for early v3 development")
     def test_sql_results(self):
         "Test results are entered into the default sql table"
         self.config.set('zuul', 'layout_config',
@@ -173,6 +155,7 @@
         self.sched.reconfigure(self.config)
         self._test_sql_results()
 
+    @skip("Disabled for early v3 development")
     def test_multiple_sql_connections(self):
         "Test putting results in different databases"
         self.config.set('zuul', 'layout_config',
@@ -237,6 +220,7 @@
     def setup_config(self, config_file='zuul-connections-bad-sql.conf'):
         super(TestConnectionsBadSQL, self).setup_config(config_file)
 
+    @skip("Disabled for early v3 development")
     def test_unable_to_connect(self):
         "Test the SQL reporter fails gracefully when unable to connect"
         self.config.set('zuul', 'layout_config',
@@ -251,26 +235,47 @@
 
 
 class TestMultipleGerrits(ZuulTestCase):
-    def setup_config(self,
-                     config_file='zuul-connections-multiple-gerrits.conf'):
-        super(TestMultipleGerrits, self).setup_config(config_file)
-        self.config.set(
-            'zuul', 'layout_config',
-            'layout-connections-multiple-gerrits.yaml')
+    config_file = 'zuul-connections-multiple-gerrits.conf'
+    tenant_config_file = 'config/zuul-connections-multiple-gerrits/main.yaml'
 
     def test_multiple_project_separate_gerrits(self):
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_another_gerrit.addFakeChange(
-            'org/project', 'master', 'A')
+            'org/project1', 'master', 'A')
         self.fake_another_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
 
         self.waitUntilSettled()
 
-        self.assertEqual(1, len(self.builds))
-        self.assertEqual('project-another-gerrit', self.builds[0].name)
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertBuilds([dict(name='project-test2',
+                                changes='1,1',
+                                project='org/project1',
+                                pipeline='another_check')])
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        # NOTE(jamielennox): the tests back the git repo for both connections
+        # onto the same git repo on the file system. If we just create another
+        # fake change the fake_review_gerrit will try to create another 1,1
+        # change and git will fail to create the ref. Arbitrarily set it to get
+        # around the problem.
+        self.fake_review_gerrit.change_number = 50
+
+        B = self.fake_review_gerrit.addFakeChange(
+            'org/project1', 'master', 'B')
+        self.fake_review_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+
+        self.waitUntilSettled()
+
+        self.assertBuilds([
+            dict(name='project-test2',
+                 changes='1,1',
+                 project='org/project1',
+                 pipeline='another_check'),
+            dict(name='project-test1',
+                 changes='51,1',
+                 project='org/project1',
+                 pipeline='review_check'),
+        ])
+
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
diff --git a/tests/test_daemon.py b/tests/unit/test_daemon.py
similarity index 100%
rename from tests/test_daemon.py
rename to tests/unit/test_daemon.py
diff --git a/tests/test_gerrit.py b/tests/unit/test_gerrit.py
similarity index 88%
rename from tests/test_gerrit.py
rename to tests/unit/test_gerrit.py
index 93ce122..999e55d 100644
--- a/tests/test_gerrit.py
+++ b/tests/unit/test_gerrit.py
@@ -20,10 +20,11 @@
 except ImportError:
     import mock
 
+import tests.base
 from tests.base import BaseTestCase
-from zuul.connection.gerrit import GerritConnection
+from zuul.driver.gerrit.gerritconnection import GerritConnection
 
-FIXTURE_DIR = os.path.join(os.path.dirname(__file__), 'fixtures/gerrit')
+FIXTURE_DIR = os.path.join(tests.base.FIXTURE_DIR, 'gerrit')
 
 
 def read_fixture(file):
@@ -46,13 +47,13 @@
 
 class TestGerrit(BaseTestCase):
 
-    @mock.patch('zuul.connection.gerrit.GerritConnection._ssh')
+    @mock.patch('zuul.driver.gerrit.gerritconnection.GerritConnection._ssh')
     def run_query(self, files, expected_patches, _ssh_mock):
         gerrit_config = {
             'user': 'gerrit',
             'server': 'localhost',
         }
-        gerrit = GerritConnection('review_gerrit', gerrit_config)
+        gerrit = GerritConnection(None, 'review_gerrit', gerrit_config)
 
         calls, values = read_fixtures(files)
         _ssh_mock.side_effect = values
diff --git a/tests/unit/test_git_driver.py b/tests/unit/test_git_driver.py
new file mode 100644
index 0000000..4d75944
--- /dev/null
+++ b/tests/unit/test_git_driver.py
@@ -0,0 +1,42 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tests.base import ZuulTestCase
+
+
+class TestGitDriver(ZuulTestCase):
+    config_file = 'zuul-git-driver.conf'
+    tenant_config_file = 'config/git-driver/main.yaml'
+
+    def setup_config(self):
+        super(TestGitDriver, self).setup_config()
+        self.config.set('connection git', 'baseurl', self.upstream_root)
+
+    def test_git_driver(self):
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        # Check that we have the git source for common-config and the
+        # gerrit source for the project.
+        self.assertEqual('git', tenant.config_repos[0][0].name)
+        self.assertEqual('common-config', tenant.config_repos[0][1].name)
+        self.assertEqual('gerrit', tenant.project_repos[0][0].name)
+        self.assertEqual('org/project', tenant.project_repos[0][1].name)
+
+        # The configuration for this test is accessed via the git
+        # driver (in common-config), rather than the gerrit driver, so
+        # if the job runs, it worked.
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 1)
+        self.assertEqual(A.reported, 1)
diff --git a/tests/test_layoutvalidator.py b/tests/unit/test_layoutvalidator.py
similarity index 97%
rename from tests/test_layoutvalidator.py
rename to tests/unit/test_layoutvalidator.py
index 46a8c7c..38c8e29 100644
--- a/tests/test_layoutvalidator.py
+++ b/tests/unit/test_layoutvalidator.py
@@ -31,6 +31,9 @@
 
 
 class TestLayoutValidator(testtools.TestCase):
+    def setUp(self):
+        self.skip("Disabled for early v3 development")
+
     def test_layouts(self):
         """Test layout file validation"""
         print()
diff --git a/tests/test_merger_repo.py b/tests/unit/test_merger_repo.py
similarity index 94%
rename from tests/test_merger_repo.py
rename to tests/unit/test_merger_repo.py
index 454f3cc..f815344 100644
--- a/tests/test_merger_repo.py
+++ b/tests/unit/test_merger_repo.py
@@ -23,14 +23,11 @@
 from zuul.merger.merger import Repo
 from tests.base import ZuulTestCase
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
-
 
 class TestMergerRepo(ZuulTestCase):
 
     log = logging.getLogger("zuul.test.merger.repo")
+    tenant_config_file = 'config/single-tenant/main.yaml'
     workspace_root = None
 
     def setUp(self):
diff --git a/tests/unit/test_model.py b/tests/unit/test_model.py
new file mode 100644
index 0000000..9bd405e
--- /dev/null
+++ b/tests/unit/test_model.py
@@ -0,0 +1,592 @@
+# Copyright 2015 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import os
+import random
+
+import fixtures
+import testtools
+
+from zuul import model
+from zuul import configloader
+
+from tests.base import BaseTestCase
+
+
+class TestJob(BaseTestCase):
+
+    def setUp(self):
+        super(TestJob, self).setUp()
+        self.project = model.Project('project', None)
+        self.context = model.SourceContext(self.project, 'master', True)
+
+    @property
+    def job(self):
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+        project = model.Project('project', None)
+        context = model.SourceContext(project, 'master', True)
+        job = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'job',
+            'irrelevant-files': [
+                '^docs/.*$'
+            ]})
+        return job
+
+    def test_change_matches_returns_false_for_matched_skip_if(self):
+        change = model.Change('project')
+        change.files = ['/COMMIT_MSG', 'docs/foo']
+        self.assertFalse(self.job.changeMatches(change))
+
+    def test_change_matches_returns_true_for_unmatched_skip_if(self):
+        change = model.Change('project')
+        change.files = ['/COMMIT_MSG', 'foo']
+        self.assertTrue(self.job.changeMatches(change))
+
+    def test_job_sets_defaults_for_boolean_attributes(self):
+        self.assertIsNotNone(self.job.voting)
+
+    def test_job_inheritance(self):
+        # This is standard job inheritance.
+
+        base_pre = model.PlaybookContext(self.context, 'base-pre')
+        base_run = model.PlaybookContext(self.context, 'base-run')
+        base_post = model.PlaybookContext(self.context, 'base-post')
+
+        base = model.Job('base')
+        base.timeout = 30
+        base.pre_run = [base_pre]
+        base.run = [base_run]
+        base.post_run = [base_post]
+        base.auth = dict(foo='bar', inherit=False)
+
+        py27 = model.Job('py27')
+        self.assertEqual(None, py27.timeout)
+        py27.inheritFrom(base)
+        self.assertEqual(30, py27.timeout)
+        self.assertEqual(['base-pre'],
+                         [x.path for x in py27.pre_run])
+        self.assertEqual(['base-run'],
+                         [x.path for x in py27.run])
+        self.assertEqual(['base-post'],
+                         [x.path for x in py27.post_run])
+        self.assertEqual({}, py27.auth)
+
+    def test_job_variants(self):
+        # This simulates freezing a job.
+
+        py27_pre = model.PlaybookContext(self.context, 'py27-pre')
+        py27_run = model.PlaybookContext(self.context, 'py27-run')
+        py27_post = model.PlaybookContext(self.context, 'py27-post')
+
+        py27 = model.Job('py27')
+        py27.timeout = 30
+        py27.pre_run = [py27_pre]
+        py27.run = [py27_run]
+        py27.post_run = [py27_post]
+        auth = dict(foo='bar', inherit=False)
+        py27.auth = auth
+
+        job = py27.copy()
+        self.assertEqual(30, job.timeout)
+
+        # Apply the diablo variant
+        diablo = model.Job('py27')
+        diablo.timeout = 40
+        job.applyVariant(diablo)
+
+        self.assertEqual(40, job.timeout)
+        self.assertEqual(['py27-pre'],
+                         [x.path for x in job.pre_run])
+        self.assertEqual(['py27-run'],
+                         [x.path for x in job.run])
+        self.assertEqual(['py27-post'],
+                         [x.path for x in job.post_run])
+        self.assertEqual(auth, job.auth)
+
+        # Set the job to final for the following checks
+        job.final = True
+        self.assertTrue(job.voting)
+
+        good_final = model.Job('py27')
+        good_final.voting = False
+        job.applyVariant(good_final)
+        self.assertFalse(job.voting)
+
+        bad_final = model.Job('py27')
+        bad_final.timeout = 600
+        with testtools.ExpectedException(
+                Exception,
+                "Unable to modify final job"):
+            job.applyVariant(bad_final)
+
+    def test_job_inheritance_configloader(self):
+        # TODO(jeblair): move this to a configloader test
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+
+        pipeline = model.Pipeline('gate', layout)
+        layout.addPipeline(pipeline)
+        queue = model.ChangeQueue(pipeline)
+        project = model.Project('project', None)
+        context = model.SourceContext(project, 'master', True)
+
+        base = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'base',
+            'timeout': 30,
+            'pre-run': 'base-pre',
+            'post-run': 'base-post',
+            'nodes': [{
+                'name': 'controller',
+                'image': 'base',
+            }],
+        })
+        layout.addJob(base)
+        python27 = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'parent': 'base',
+            'pre-run': 'py27-pre',
+            'post-run': 'py27-post',
+            'nodes': [{
+                'name': 'controller',
+                'image': 'new',
+            }],
+            'timeout': 40,
+        })
+        layout.addJob(python27)
+        python27diablo = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'branches': [
+                'stable/diablo'
+            ],
+            'pre-run': 'py27-diablo-pre',
+            'run': 'py27-diablo',
+            'post-run': 'py27-diablo-post',
+            'nodes': [{
+                'name': 'controller',
+                'image': 'old',
+            }],
+            'timeout': 50,
+        })
+        layout.addJob(python27diablo)
+
+        python27essex = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'branches': [
+                'stable/essex'
+            ],
+            'pre-run': 'py27-essex-pre',
+            'post-run': 'py27-essex-post',
+        })
+        layout.addJob(python27essex)
+
+        project_config = configloader.ProjectParser.fromYaml(tenant, layout, [{
+            '_source_context': context,
+            'name': 'project',
+            'gate': {
+                'jobs': [
+                    'python27'
+                ]
+            }
+        }])
+        layout.addProjectConfig(project_config, update_pipeline=False)
+
+        change = model.Change(project)
+        # Test master
+        change.branch = 'master'
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertTrue(python27.changeMatches(change))
+        self.assertFalse(python27diablo.changeMatches(change))
+        self.assertFalse(python27essex.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual(len(item.getJobs()), 1)
+        job = item.getJobs()[0]
+        self.assertEqual(job.name, 'python27')
+        self.assertEqual(job.timeout, 40)
+        nodes = job.nodeset.getNodes()
+        self.assertEqual(len(nodes), 1)
+        self.assertEqual(nodes[0].image, 'new')
+        self.assertEqual([x.path for x in job.pre_run],
+                         ['playbooks/base-pre',
+                          'playbooks/py27-pre'])
+        self.assertEqual([x.path for x in job.post_run],
+                         ['playbooks/py27-post',
+                          'playbooks/base-post'])
+        self.assertEqual([x.path for x in job.run],
+                         ['playbooks/python27',
+                          'playbooks/base'])
+
+        # Test diablo
+        change.branch = 'stable/diablo'
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertTrue(python27.changeMatches(change))
+        self.assertTrue(python27diablo.changeMatches(change))
+        self.assertFalse(python27essex.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual(len(item.getJobs()), 1)
+        job = item.getJobs()[0]
+        self.assertEqual(job.name, 'python27')
+        self.assertEqual(job.timeout, 50)
+        nodes = job.nodeset.getNodes()
+        self.assertEqual(len(nodes), 1)
+        self.assertEqual(nodes[0].image, 'old')
+        self.assertEqual([x.path for x in job.pre_run],
+                         ['playbooks/base-pre',
+                          'playbooks/py27-pre',
+                          'playbooks/py27-diablo-pre'])
+        self.assertEqual([x.path for x in job.post_run],
+                         ['playbooks/py27-diablo-post',
+                          'playbooks/py27-post',
+                          'playbooks/base-post'])
+        self.assertEqual([x.path for x in job.run],
+                         ['playbooks/py27-diablo']),
+
+        # Test essex
+        change.branch = 'stable/essex'
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertTrue(python27.changeMatches(change))
+        self.assertFalse(python27diablo.changeMatches(change))
+        self.assertTrue(python27essex.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual(len(item.getJobs()), 1)
+        job = item.getJobs()[0]
+        self.assertEqual(job.name, 'python27')
+        self.assertEqual([x.path for x in job.pre_run],
+                         ['playbooks/base-pre',
+                          'playbooks/py27-pre',
+                          'playbooks/py27-essex-pre'])
+        self.assertEqual([x.path for x in job.post_run],
+                         ['playbooks/py27-essex-post',
+                          'playbooks/py27-post',
+                          'playbooks/base-post'])
+        self.assertEqual([x.path for x in job.run],
+                         ['playbooks/python27',
+                          'playbooks/base'])
+
+    def test_job_auth_inheritance(self):
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+        project = model.Project('project', None)
+        context = model.SourceContext(project, 'master', True)
+
+        base = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'base',
+            'timeout': 30,
+        })
+        layout.addJob(base)
+        pypi_upload_without_inherit = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'pypi-upload-without-inherit',
+                'parent': 'base',
+                'timeout': 40,
+                'auth': {
+                    'secrets': [
+                        'pypi-credentials',
+                    ]
+                }
+            })
+        layout.addJob(pypi_upload_without_inherit)
+        pypi_upload_with_inherit = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'pypi-upload-with-inherit',
+                'parent': 'base',
+                'timeout': 40,
+                'auth': {
+                    'inherit': True,
+                    'secrets': [
+                        'pypi-credentials',
+                    ]
+                }
+            })
+        layout.addJob(pypi_upload_with_inherit)
+        pypi_upload_with_inherit_false = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'pypi-upload-with-inherit-false',
+                'parent': 'base',
+                'timeout': 40,
+                'auth': {
+                    'inherit': False,
+                    'secrets': [
+                        'pypi-credentials',
+                    ]
+                }
+            })
+        layout.addJob(pypi_upload_with_inherit_false)
+        in_repo_job_without_inherit = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'in-repo-job-without-inherit',
+                'parent': 'pypi-upload-without-inherit',
+            })
+        layout.addJob(in_repo_job_without_inherit)
+        in_repo_job_with_inherit = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'in-repo-job-with-inherit',
+                'parent': 'pypi-upload-with-inherit',
+            })
+        layout.addJob(in_repo_job_with_inherit)
+        in_repo_job_with_inherit_false = configloader.JobParser.fromYaml(
+            tenant, layout, {
+                '_source_context': context,
+                'name': 'in-repo-job-with-inherit-false',
+                'parent': 'pypi-upload-with-inherit-false',
+            })
+        layout.addJob(in_repo_job_with_inherit_false)
+
+        self.assertNotIn('auth', in_repo_job_without_inherit.auth)
+        self.assertIn('secrets', in_repo_job_with_inherit.auth)
+        self.assertEquals(in_repo_job_with_inherit.auth['secrets'],
+                          ['pypi-credentials'])
+        self.assertNotIn('auth', in_repo_job_with_inherit_false.auth)
+
+    def test_job_inheritance_job_tree(self):
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+
+        pipeline = model.Pipeline('gate', layout)
+        layout.addPipeline(pipeline)
+        queue = model.ChangeQueue(pipeline)
+        project = model.Project('project', None)
+        context = model.SourceContext(project, 'master', True)
+
+        base = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'base',
+            'timeout': 30,
+        })
+        layout.addJob(base)
+        python27 = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'parent': 'base',
+            'timeout': 40,
+        })
+        layout.addJob(python27)
+        python27diablo = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'branches': [
+                'stable/diablo'
+            ],
+            'timeout': 50,
+        })
+        layout.addJob(python27diablo)
+
+        project_config = configloader.ProjectParser.fromYaml(tenant, layout, [{
+            '_source_context': context,
+            'name': 'project',
+            'gate': {
+                'jobs': [
+                    {'python27': {'timeout': 70}}
+                ]
+            }
+        }])
+        layout.addProjectConfig(project_config, update_pipeline=False)
+
+        change = model.Change(project)
+        change.branch = 'master'
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertTrue(python27.changeMatches(change))
+        self.assertFalse(python27diablo.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual(len(item.getJobs()), 1)
+        job = item.getJobs()[0]
+        self.assertEqual(job.name, 'python27')
+        self.assertEqual(job.timeout, 70)
+
+        change.branch = 'stable/diablo'
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertTrue(python27.changeMatches(change))
+        self.assertTrue(python27diablo.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual(len(item.getJobs()), 1)
+        job = item.getJobs()[0]
+        self.assertEqual(job.name, 'python27')
+        self.assertEqual(job.timeout, 70)
+
+    def test_inheritance_keeps_matchers(self):
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+
+        pipeline = model.Pipeline('gate', layout)
+        layout.addPipeline(pipeline)
+        queue = model.ChangeQueue(pipeline)
+        project = model.Project('project', None)
+        context = model.SourceContext(project, 'master', True)
+
+        base = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'base',
+            'timeout': 30,
+        })
+        layout.addJob(base)
+        python27 = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': context,
+            'name': 'python27',
+            'parent': 'base',
+            'timeout': 40,
+            'irrelevant-files': ['^ignored-file$'],
+        })
+        layout.addJob(python27)
+
+        project_config = configloader.ProjectParser.fromYaml(tenant, layout, [{
+            '_source_context': context,
+            'name': 'project',
+            'gate': {
+                'jobs': [
+                    'python27',
+                ]
+            }
+        }])
+        layout.addProjectConfig(project_config, update_pipeline=False)
+
+        change = model.Change(project)
+        change.branch = 'master'
+        change.files = ['/COMMIT_MSG', 'ignored-file']
+        item = queue.enqueueChange(change)
+        item.current_build_set.layout = layout
+
+        self.assertTrue(base.changeMatches(change))
+        self.assertFalse(python27.changeMatches(change))
+
+        item.freezeJobTree()
+        self.assertEqual([], item.getJobs())
+
+    def test_job_source_project(self):
+        tenant = model.Tenant('tenant')
+        layout = model.Layout()
+        base_project = model.Project('base_project', None)
+        base_context = model.SourceContext(base_project, 'master', True)
+
+        base = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': base_context,
+            'name': 'base',
+        })
+        layout.addJob(base)
+
+        other_project = model.Project('other_project', None)
+        other_context = model.SourceContext(other_project, 'master', True)
+        base2 = configloader.JobParser.fromYaml(tenant, layout, {
+            '_source_context': other_context,
+            'name': 'base',
+        })
+        with testtools.ExpectedException(
+                Exception,
+                "Job base in other_project is not permitted "
+                "to shadow job base in base_project"):
+            layout.addJob(base2)
+
+
+class TestJobTimeData(BaseTestCase):
+    def setUp(self):
+        super(TestJobTimeData, self).setUp()
+        self.tmp_root = self.useFixture(fixtures.TempDir(
+            rootdir=os.environ.get("ZUUL_TEST_ROOT"))
+        ).path
+
+    def test_empty_timedata(self):
+        path = os.path.join(self.tmp_root, 'job-name')
+        self.assertFalse(os.path.exists(path))
+        self.assertFalse(os.path.exists(path + '.tmp'))
+        td = model.JobTimeData(path)
+        self.assertEqual(td.success_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertEqual(td.failure_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertEqual(td.results, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+
+    def test_save_reload(self):
+        path = os.path.join(self.tmp_root, 'job-name')
+        self.assertFalse(os.path.exists(path))
+        self.assertFalse(os.path.exists(path + '.tmp'))
+        td = model.JobTimeData(path)
+        self.assertEqual(td.success_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertEqual(td.failure_times, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertEqual(td.results, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
+        success_times = []
+        failure_times = []
+        results = []
+        for x in range(10):
+            success_times.append(int(random.random() * 1000))
+            failure_times.append(int(random.random() * 1000))
+            results.append(0)
+            results.append(1)
+        random.shuffle(results)
+        s = f = 0
+        for result in results:
+            if result:
+                td.add(failure_times[f], 'FAILURE')
+                f += 1
+            else:
+                td.add(success_times[s], 'SUCCESS')
+                s += 1
+        self.assertEqual(td.success_times, success_times)
+        self.assertEqual(td.failure_times, failure_times)
+        self.assertEqual(td.results, results[10:])
+        td.save()
+        self.assertTrue(os.path.exists(path))
+        self.assertFalse(os.path.exists(path + '.tmp'))
+        td = model.JobTimeData(path)
+        td.load()
+        self.assertEqual(td.success_times, success_times)
+        self.assertEqual(td.failure_times, failure_times)
+        self.assertEqual(td.results, results[10:])
+
+
+class TestTimeDataBase(BaseTestCase):
+    def setUp(self):
+        super(TestTimeDataBase, self).setUp()
+        self.tmp_root = self.useFixture(fixtures.TempDir(
+            rootdir=os.environ.get("ZUUL_TEST_ROOT"))
+        ).path
+        self.db = model.TimeDataBase(self.tmp_root)
+
+    def test_timedatabase(self):
+        self.assertEqual(self.db.getEstimatedTime('job-name'), 0)
+        self.db.update('job-name', 50, 'SUCCESS')
+        self.assertEqual(self.db.getEstimatedTime('job-name'), 50)
+        self.db.update('job-name', 100, 'SUCCESS')
+        self.assertEqual(self.db.getEstimatedTime('job-name'), 75)
+        for x in range(10):
+            self.db.update('job-name', 100, 'SUCCESS')
+        self.assertEqual(self.db.getEstimatedTime('job-name'), 100)
diff --git a/tests/unit/test_nodepool.py b/tests/unit/test_nodepool.py
new file mode 100644
index 0000000..19c7e05
--- /dev/null
+++ b/tests/unit/test_nodepool.py
@@ -0,0 +1,123 @@
+# Copyright 2017 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+import time
+
+import zuul.zk
+import zuul.nodepool
+from zuul import model
+
+from tests.base import BaseTestCase, ChrootedKazooFixture, FakeNodepool
+
+
+class TestNodepool(BaseTestCase):
+    # Tests the Nodepool interface class using a fake nodepool and
+    # scheduler.
+
+    def setUp(self):
+        super(BaseTestCase, self).setUp()
+
+        self.zk_chroot_fixture = self.useFixture(ChrootedKazooFixture())
+        self.zk_config = '%s:%s%s' % (
+            self.zk_chroot_fixture.zookeeper_host,
+            self.zk_chroot_fixture.zookeeper_port,
+            self.zk_chroot_fixture.zookeeper_chroot)
+
+        self.zk = zuul.zk.ZooKeeper()
+        self.zk.connect(self.zk_config)
+
+        self.provisioned_requests = []
+        # This class implements the scheduler methods zuul.nodepool
+        # needs, so we pass 'self' as the scheduler.
+        self.nodepool = zuul.nodepool.Nodepool(self)
+
+        self.fake_nodepool = FakeNodepool(
+            self.zk_chroot_fixture.zookeeper_host,
+            self.zk_chroot_fixture.zookeeper_port,
+            self.zk_chroot_fixture.zookeeper_chroot)
+
+    def waitForRequests(self):
+        # Wait until all requests are complete.
+        while self.nodepool.requests:
+            time.sleep(0.1)
+
+    def onNodesProvisioned(self, request):
+        # This is a scheduler method that the nodepool class calls
+        # back when a request is provisioned.
+        self.provisioned_requests.append(request)
+
+    def test_node_request(self):
+        # Test a simple node request
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
+        nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        request = self.nodepool.requestNodes(None, job)
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 1)
+        self.assertEqual(request.state, 'fulfilled')
+
+        # Accept the nodes
+        self.nodepool.acceptNodes(request)
+        nodeset = request.nodeset
+
+        for node in nodeset.getNodes():
+            self.assertIsNotNone(node.lock)
+            self.assertEqual(node.state, 'ready')
+
+        # Mark the nodes in use
+        self.nodepool.useNodeSet(nodeset)
+        for node in nodeset.getNodes():
+            self.assertEqual(node.state, 'in-use')
+
+        # Return the nodes
+        self.nodepool.returnNodeSet(nodeset)
+        for node in nodeset.getNodes():
+            self.assertIsNone(node.lock)
+            self.assertEqual(node.state, 'used')
+
+    def test_node_request_disconnect(self):
+        # Test that node requests are re-submitted after disconnect
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
+        nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        self.fake_nodepool.paused = True
+        request = self.nodepool.requestNodes(None, job)
+        self.zk.client.stop()
+        self.zk.client.start()
+        self.fake_nodepool.paused = False
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 1)
+        self.assertEqual(request.state, 'fulfilled')
+
+    def test_node_request_canceled(self):
+        # Test that node requests can be canceled
+
+        nodeset = model.NodeSet()
+        nodeset.addNode(model.Node('controller', 'ubuntu-xenial'))
+        nodeset.addNode(model.Node('compute', 'ubuntu-xenial'))
+        job = model.Job('testjob')
+        job.nodeset = nodeset
+        self.fake_nodepool.paused = True
+        request = self.nodepool.requestNodes(None, job)
+        self.nodepool.cancelRequest(request)
+
+        self.waitForRequests()
+        self.assertEqual(len(self.provisioned_requests), 0)
diff --git a/tests/unit/test_openstack.py b/tests/unit/test_openstack.py
new file mode 100644
index 0000000..670e578
--- /dev/null
+++ b/tests/unit/test_openstack.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python
+
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+
+from tests.base import AnsibleZuulTestCase
+
+
+class TestOpenStack(AnsibleZuulTestCase):
+    # A temporary class to experiment with how openstack can use
+    # Zuulv3
+
+    tenant_config_file = 'config/openstack/main.yaml'
+
+    def test_nova_master(self):
+        A = self.fake_gerrit.addFakeChange('openstack/nova', 'master', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('python27').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('python35').result,
+                         'SUCCESS')
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertEqual(self.getJobFromHistory('python27').node,
+                         'ubuntu-xenial')
+
+    def test_nova_mitaka(self):
+        self.create_branch('openstack/nova', 'stable/mitaka')
+        A = self.fake_gerrit.addFakeChange('openstack/nova',
+                                           'stable/mitaka', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('python27').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('python35').result,
+                         'SUCCESS')
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertEqual(self.getJobFromHistory('python27').node,
+                         'ubuntu-trusty')
+
+    def test_dsvm_keystone_repo(self):
+        self.launch_server.keep_jobdir = True
+        A = self.fake_gerrit.addFakeChange('openstack/nova', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertHistory([
+            dict(name='dsvm', result='SUCCESS', changes='1,1')])
+        build = self.getJobFromHistory('dsvm')
+
+        # Check that a change to nova triggered a keystone clone
+        launcher_git_dir = os.path.join(self.launcher_src_root,
+                                        'openstack', 'keystone', '.git')
+        self.assertTrue(os.path.exists(launcher_git_dir),
+                        msg='openstack/keystone should be cloned.')
+
+        jobdir_git_dir = os.path.join(build.jobdir.src_root,
+                                      'openstack', 'keystone', '.git')
+        self.assertTrue(os.path.exists(jobdir_git_dir),
+                        msg='openstack/keystone should be cloned.')
+
+    def test_dsvm_nova_repo(self):
+        self.launch_server.keep_jobdir = True
+        A = self.fake_gerrit.addFakeChange('openstack/keystone', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertHistory([
+            dict(name='dsvm', result='SUCCESS', changes='1,1')])
+        build = self.getJobFromHistory('dsvm')
+
+        # Check that a change to keystone triggered a nova clone
+        launcher_git_dir = os.path.join(self.launcher_src_root,
+                                        'openstack', 'nova', '.git')
+        self.assertTrue(os.path.exists(launcher_git_dir),
+                        msg='openstack/nova should be cloned.')
+
+        jobdir_git_dir = os.path.join(build.jobdir.src_root,
+                                      'openstack', 'nova', '.git')
+        self.assertTrue(os.path.exists(jobdir_git_dir),
+                        msg='openstack/nova should be cloned.')
diff --git a/tests/test_requirements.py b/tests/unit/test_requirements.py
similarity index 68%
rename from tests/test_requirements.py
rename to tests/unit/test_requirements.py
index 81814bf..7e578cf 100644
--- a/tests/test_requirements.py
+++ b/tests/unit/test_requirements.py
@@ -14,232 +14,230 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import logging
 import time
 
 from tests.base import ZuulTestCase
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
 
+class TestRequirementsApprovalNewerThan(ZuulTestCase):
+    """Requirements with a newer-than comment requirement"""
 
-class TestRequirements(ZuulTestCase):
-    """Test pipeline and trigger requirements"""
+    tenant_config_file = 'config/requirements/newer-than/main.yaml'
 
     def test_pipeline_require_approval_newer_than(self):
         "Test pipeline requirement: approval newer than"
         return self._test_require_approval_newer_than('org/project1',
-                                                      'project1-pipeline')
+                                                      'project1-job')
 
     def test_trigger_require_approval_newer_than(self):
         "Test trigger requirement: approval newer than"
         return self._test_require_approval_newer_than('org/project2',
-                                                      'project2-trigger')
+                                                      'project2-job')
 
     def _test_require_approval_newer_than(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-newer-than.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No +1 from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # Add a too-old +1, should not be enqueued
-        A.addApproval('VRFY', 1, username='jenkins',
+        A.addApproval('verified', 1, username='jenkins',
                       granted_on=time.time() - 72 * 60 * 60)
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # Add a recent +1
-        self.fake_gerrit.addEvent(A.addApproval('VRFY', 1, username='jenkins'))
+        self.fake_gerrit.addEvent(A.addApproval('verified', 1,
+                                                username='jenkins'))
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
+
+class TestRequirementsApprovalOlderThan(ZuulTestCase):
+    """Requirements with a older-than comment requirement"""
+
+    tenant_config_file = 'config/requirements/older-than/main.yaml'
+
     def test_pipeline_require_approval_older_than(self):
         "Test pipeline requirement: approval older than"
         return self._test_require_approval_older_than('org/project1',
-                                                      'project1-pipeline')
+                                                      'project1-job')
 
     def test_trigger_require_approval_older_than(self):
         "Test trigger requirement: approval older than"
         return self._test_require_approval_older_than('org/project2',
-                                                      'project2-trigger')
+                                                      'project2-job')
 
     def _test_require_approval_older_than(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-older-than.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No +1 from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # Add a recent +1 which should not be enqueued
-        A.addApproval('VRFY', 1)
+        A.addApproval('verified', 1)
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # Add an old +1 which should be enqueued
-        A.addApproval('VRFY', 1, username='jenkins',
+        A.addApproval('verified', 1, username='jenkins',
                       granted_on=time.time() - 72 * 60 * 60)
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
+
+class TestRequirementsUserName(ZuulTestCase):
+    """Requirements with a username requirement"""
+
+    tenant_config_file = 'config/requirements/username/main.yaml'
+
     def test_pipeline_require_approval_username(self):
         "Test pipeline requirement: approval username"
         return self._test_require_approval_username('org/project1',
-                                                    'project1-pipeline')
+                                                    'project1-job')
 
     def test_trigger_require_approval_username(self):
         "Test trigger requirement: approval username"
         return self._test_require_approval_username('org/project2',
-                                                    'project2-trigger')
+                                                    'project2-job')
 
     def _test_require_approval_username(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-username.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No approval from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # Add an approval from Jenkins
-        A.addApproval('VRFY', 1, username='jenkins')
+        A.addApproval('verified', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
+
+class TestRequirementsEmail(ZuulTestCase):
+    """Requirements with a email requirement"""
+
+    tenant_config_file = 'config/requirements/email/main.yaml'
+
     def test_pipeline_require_approval_email(self):
         "Test pipeline requirement: approval email"
         return self._test_require_approval_email('org/project1',
-                                                 'project1-pipeline')
+                                                 'project1-job')
 
     def test_trigger_require_approval_email(self):
         "Test trigger requirement: approval email"
         return self._test_require_approval_email('org/project2',
-                                                 'project2-trigger')
+                                                 'project2-job')
 
     def _test_require_approval_email(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-email.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No approval from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # Add an approval from Jenkins
-        A.addApproval('VRFY', 1, username='jenkins')
+        A.addApproval('verified', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
+
+class TestRequirementsVote1(ZuulTestCase):
+    """Requirements with a voting requirement"""
+
+    tenant_config_file = 'config/requirements/vote1/main.yaml'
+
     def test_pipeline_require_approval_vote1(self):
         "Test pipeline requirement: approval vote with one value"
         return self._test_require_approval_vote1('org/project1',
-                                                 'project1-pipeline')
+                                                 'project1-job')
 
     def test_trigger_require_approval_vote1(self):
         "Test trigger requirement: approval vote with one value"
         return self._test_require_approval_vote1('org/project2',
-                                                 'project2-trigger')
+                                                 'project2-job')
 
     def _test_require_approval_vote1(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-vote1.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No approval from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # A -1 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -1, username='jenkins')
+        A.addApproval('verified', -1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # A +1 should allow it to be enqueued
-        A.addApproval('VRFY', 1, username='jenkins')
+        A.addApproval('verified', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
+
+class TestRequirementsVote2(ZuulTestCase):
+    """Requirements with a voting requirement"""
+
+    tenant_config_file = 'config/requirements/vote2/main.yaml'
+
     def test_pipeline_require_approval_vote2(self):
         "Test pipeline requirement: approval vote with two values"
         return self._test_require_approval_vote2('org/project1',
-                                                 'project1-pipeline')
+                                                 'project1-job')
 
     def test_trigger_require_approval_vote2(self):
         "Test trigger requirement: approval vote with two values"
         return self._test_require_approval_vote2('org/project2',
-                                                 'project2-trigger')
+                                                 'project2-job')
 
     def _test_require_approval_vote2(self, project, job):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-vote2.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         # A comment event that we will keep submitting to trigger
-        comment = A.addApproval('CRVW', 2, username='nobody')
+        comment = A.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         # No approval from Jenkins so should not be enqueued
         self.assertEqual(len(self.history), 0)
 
         # A -1 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -1, username='jenkins')
+        A.addApproval('verified', -1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # A -2 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -2, username='jenkins')
+        A.addApproval('verified', -2, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # A +1 from jenkins should allow it to be enqueued
-        A.addApproval('VRFY', 1, username='jenkins')
+        A.addApproval('verified', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -248,33 +246,33 @@
         # A +2 from nobody should not cause it to be enqueued
         B = self.fake_gerrit.addFakeChange(project, 'master', 'B')
         # A comment event that we will keep submitting to trigger
-        comment = B.addApproval('CRVW', 2, username='nobody')
+        comment = B.addApproval('code-review', 2, username='nobody')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
         # A +2 from jenkins should allow it to be enqueued
-        B.addApproval('VRFY', 2, username='jenkins')
+        B.addApproval('verified', 2, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 2)
         self.assertEqual(self.history[1].name, job)
 
+
+class TestRequirementsState(ZuulTestCase):
+    """Requirements with simple state requirement"""
+
+    tenant_config_file = 'config/requirements/state/main.yaml'
+
     def test_pipeline_require_current_patchset(self):
-        "Test pipeline requirement: current-patchset"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-'
-                        'current-patchset.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
         # Create two patchsets and let their tests settle out. Then
         # comment on first patchset and check that no additional
         # jobs are run.
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 1))
+        A = self.fake_gerrit.addFakeChange('current-project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 1))
         self.waitUntilSettled()
         A.addPatchset()
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 1))
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 2)  # one job for each ps
@@ -290,70 +288,59 @@
         self.assertEqual(len(self.history), 3)
 
     def test_pipeline_require_open(self):
-        "Test pipeline requirement: open"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-open.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+        A = self.fake_gerrit.addFakeChange('open-project', 'master', 'A',
                                            status='MERGED')
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 2))
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 2))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
-        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        self.fake_gerrit.addEvent(B.addApproval('CRVW', 2))
+        B = self.fake_gerrit.addFakeChange('open-project', 'master', 'B')
+        self.fake_gerrit.addEvent(B.addApproval('code-review', 2))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
     def test_pipeline_require_status(self):
-        "Test pipeline requirement: status"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-requirement-status.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+        A = self.fake_gerrit.addFakeChange('status-project', 'master', 'A',
                                            status='MERGED')
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 2))
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 2))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
-        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        self.fake_gerrit.addEvent(B.addApproval('CRVW', 2))
+        B = self.fake_gerrit.addFakeChange('status-project', 'master', 'B')
+        self.fake_gerrit.addEvent(B.addApproval('code-review', 2))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
+
+class TestRequirementsRejectUsername(ZuulTestCase):
+    """Requirements with reject username requirement"""
+
+    tenant_config_file = 'config/requirements/reject-username/main.yaml'
+
     def _test_require_reject_username(self, project, job):
         "Test negative username's match"
         # Should only trigger if Jenkins hasn't voted.
-        self.config.set(
-            'zuul', 'layout_config',
-            'tests/fixtures/layout-requirement-reject-username.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         # add in a change with no comments
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # add in a comment that will trigger
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 1,
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 1,
                                                 username='reviewer'))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
         # add in a comment from jenkins user which shouldn't trigger
-        self.fake_gerrit.addEvent(A.addApproval('VRFY', 1, username='jenkins'))
+        self.fake_gerrit.addEvent(A.addApproval('verified', 1,
+                                                username='jenkins'))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
         # Check future reviews also won't trigger as a 'jenkins' user has
         # commented previously
-        self.fake_gerrit.addEvent(A.addApproval('CRVW', 1,
+        self.fake_gerrit.addEvent(A.addApproval('code-review', 1,
                                                 username='reviewer'))
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -361,59 +348,59 @@
     def test_pipeline_reject_username(self):
         "Test negative pipeline requirement: no comment from jenkins"
         return self._test_require_reject_username('org/project1',
-                                                  'project1-pipeline')
+                                                  'project1-job')
 
     def test_trigger_reject_username(self):
         "Test negative trigger requirement: no comment from jenkins"
         return self._test_require_reject_username('org/project2',
-                                                  'project2-trigger')
+                                                  'project2-job')
+
+
+class TestRequirementsReject(ZuulTestCase):
+    """Requirements with reject requirement"""
+
+    tenant_config_file = 'config/requirements/reject/main.yaml'
 
     def _test_require_reject(self, project, job):
         "Test no approval matches a reject param"
-        self.config.set(
-            'zuul', 'layout_config',
-            'tests/fixtures/layout-requirement-reject.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # First positive vote should not queue until jenkins has +1'd
-        comment = A.addApproval('VRFY', 1, username='reviewer_a')
+        comment = A.addApproval('verified', 1, username='reviewer_a')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # Jenkins should put in a +1 which will also queue
-        comment = A.addApproval('VRFY', 1, username='jenkins')
+        comment = A.addApproval('verified', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(self.history[0].name, job)
 
         # Negative vote should not queue
-        comment = A.addApproval('VRFY', -1, username='reviewer_b')
+        comment = A.addApproval('verified', -1, username='reviewer_b')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
         # Future approvals should do nothing
-        comment = A.addApproval('VRFY', 1, username='reviewer_c')
+        comment = A.addApproval('verified', 1, username='reviewer_c')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
         # Change/update negative vote should queue
-        comment = A.addApproval('VRFY', 1, username='reviewer_b')
+        comment = A.addApproval('verified', 1, username='reviewer_b')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 2)
         self.assertEqual(self.history[1].name, job)
 
         # Future approvals should also queue
-        comment = A.addApproval('VRFY', 1, username='reviewer_d')
+        comment = A.addApproval('verified', 1, username='reviewer_d')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 3)
@@ -421,8 +408,8 @@
 
     def test_pipeline_require_reject(self):
         "Test pipeline requirement: rejections absent"
-        return self._test_require_reject('org/project1', 'project1-pipeline')
+        return self._test_require_reject('org/project1', 'project1-job')
 
     def test_trigger_require_reject(self):
         "Test trigger requirement: rejections absent"
-        return self._test_require_reject('org/project2', 'project2-trigger')
+        return self._test_require_reject('org/project2', 'project2-job')
diff --git a/tests/test_scheduler.py b/tests/unit/test_scheduler.py
similarity index 68%
rename from tests/test_scheduler.py
rename to tests/unit/test_scheduler.py
index d205395..2837cfe 100755
--- a/tests/test_scheduler.py
+++ b/tests/unit/test_scheduler.py
@@ -15,12 +15,11 @@
 # under the License.
 
 import json
-import logging
 import os
 import re
 import shutil
 import time
-import yaml
+from unittest import skip
 
 import git
 from six.moves import urllib
@@ -29,27 +28,23 @@
 import zuul.change_matcher
 import zuul.scheduler
 import zuul.rpcclient
-import zuul.reporter.gerrit
-import zuul.reporter.smtp
+import zuul.model
 
 from tests.base import (
     ZuulTestCase,
     repack_repo,
 )
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
-
 
 class TestScheduler(ZuulTestCase):
+    tenant_config_file = 'config/single-tenant/main.yaml'
 
     def test_jobs_launched(self):
         "Test that jobs are launched and a change is merged"
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -59,7 +54,11 @@
                          'SUCCESS')
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
+        self.assertEqual(self.getJobFromHistory('project-test1').node,
+                         'image1')
+        self.assertIsNone(self.getJobFromHistory('project-test2').node)
 
+        # TODOv3(jeblair): we may want to report stats by tenant (also?).
         self.assertReportedStat('gerrit.event.comment-added', value='1|c')
         self.assertReportedStat('zuul.pipeline.gate.current_changes',
                                 value='1|g')
@@ -80,101 +79,96 @@
 
     def test_initial_pipeline_gauges(self):
         "Test that each pipeline reported its length on start"
-        pipeline_names = self.sched.layout.pipelines.keys()
-        self.assertNotEqual(len(pipeline_names), 0)
-        for name in pipeline_names:
-            self.assertReportedStat('zuul.pipeline.%s.current_changes' % name,
-                                    value='0|g')
+        self.assertReportedStat('zuul.pipeline.gate.current_changes',
+                                value='0|g')
+        self.assertReportedStat('zuul.pipeline.check.current_changes',
+                                value='0|g')
 
-    def test_duplicate_pipelines(self):
-        "Test that a change matching multiple pipelines works"
-
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        self.fake_gerrit.addEvent(A.getChangeRestoredEvent())
+    def test_job_branch(self):
+        "Test the correct variant of a job runs on a branch"
+        self.create_branch('org/project', 'stable')
+        A = self.fake_gerrit.addFakeChange('org/project', 'stable', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
-
-        self.assertEqual(len(self.history), 2)
-        self.history[0].name == 'project-test1'
-        self.history[1].name == 'project-test1'
-
-        self.assertEqual(len(A.messages), 2)
-        if 'dup1/project-test1' in A.messages[0]:
-            self.assertIn('dup1/project-test1', A.messages[0])
-            self.assertNotIn('dup2/project-test1', A.messages[0])
-            self.assertNotIn('dup1/project-test1', A.messages[1])
-            self.assertIn('dup2/project-test1', A.messages[1])
-        else:
-            self.assertIn('dup1/project-test1', A.messages[1])
-            self.assertNotIn('dup2/project-test1', A.messages[1])
-            self.assertNotIn('dup1/project-test1', A.messages[0])
-            self.assertIn('dup2/project-test1', A.messages[0])
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertIn('gate', A.messages[1],
+                      "A should transit gate")
+        self.assertEqual(self.getJobFromHistory('project-test1').node,
+                         'image2')
 
     def test_parallel_changes(self):
         "Test that changes are tested in parallel and merged in series"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 1)
         self.assertEqual(self.builds[0].name, 'project-merge')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertTrue(self.builds[0].hasChanges(A))
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 3)
         self.assertEqual(self.builds[0].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertTrue(self.builds[0].hasChanges(A))
         self.assertEqual(self.builds[1].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[1], A))
+        self.assertTrue(self.builds[1].hasChanges(A))
         self.assertEqual(self.builds[2].name, 'project-merge')
-        self.assertTrue(self.job_has_changes(self.builds[2], A, B))
+        self.assertTrue(self.builds[2].hasChanges(A, B))
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 5)
         self.assertEqual(self.builds[0].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertTrue(self.builds[0].hasChanges(A))
         self.assertEqual(self.builds[1].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[1], A))
+        self.assertTrue(self.builds[1].hasChanges(A))
 
         self.assertEqual(self.builds[2].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[2], A, B))
+        self.assertTrue(self.builds[2].hasChanges(A, B))
         self.assertEqual(self.builds[3].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[3], A, B))
+        self.assertTrue(self.builds[3].hasChanges(A, B))
 
         self.assertEqual(self.builds[4].name, 'project-merge')
-        self.assertTrue(self.job_has_changes(self.builds[4], A, B, C))
+        self.assertTrue(self.builds[4].hasChanges(A, B, C))
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 6)
         self.assertEqual(self.builds[0].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertTrue(self.builds[0].hasChanges(A))
         self.assertEqual(self.builds[1].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[1], A))
+        self.assertTrue(self.builds[1].hasChanges(A))
 
         self.assertEqual(self.builds[2].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[2], A, B))
+        self.assertTrue(self.builds[2].hasChanges(A, B))
         self.assertEqual(self.builds[3].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[3], A, B))
+        self.assertTrue(self.builds[3].hasChanges(A, B))
 
         self.assertEqual(self.builds[4].name, 'project-test1')
-        self.assertTrue(self.job_has_changes(self.builds[4], A, B, C))
+        self.assertTrue(self.builds[4].hasChanges(A, B, C))
         self.assertEqual(self.builds[5].name, 'project-test2')
-        self.assertTrue(self.job_has_changes(self.builds[5], A, B, C))
+        self.assertTrue(self.builds[5].hasChanges(A, B, C))
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 0)
 
@@ -188,29 +182,58 @@
 
     def test_failed_changes(self):
         "Test that a change behind a failed change is retested"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertBuilds([dict(name='project-merge', changes='1,1')])
+
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+        # A/project-merge is complete
+        self.assertBuilds([
+            dict(name='project-test1', changes='1,1'),
+            dict(name='project-test2', changes='1,1'),
+            dict(name='project-merge', changes='1,1 2,1'),
+        ])
+
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+        # A/project-merge is complete
+        # B/project-merge is complete
+        self.assertBuilds([
+            dict(name='project-test1', changes='1,1'),
+            dict(name='project-test2', changes='1,1'),
+            dict(name='project-test1', changes='1,1 2,1'),
+            dict(name='project-test2', changes='1,1 2,1'),
+        ])
+
+        # Release project-test1 for A which will fail.  This will
+        # abort both running B jobs and relaunch project-merge for B.
+        self.builds[0].release()
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
-        self.waitUntilSettled()
+        self.orderedRelease()
+        self.assertHistory([
+            dict(name='project-merge', result='SUCCESS', changes='1,1'),
+            dict(name='project-merge', result='SUCCESS', changes='1,1 2,1'),
+            dict(name='project-test1', result='FAILURE', changes='1,1'),
+            dict(name='project-test1', result='ABORTED', changes='1,1 2,1'),
+            dict(name='project-test2', result='ABORTED', changes='1,1 2,1'),
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-merge', result='SUCCESS', changes='2,1'),
+            dict(name='project-test1', result='SUCCESS', changes='2,1'),
+            dict(name='project-test2', result='SUCCESS', changes='2,1'),
+        ], ordered=False)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
-
-        self.waitUntilSettled()
-        # It's certain that the merge job for change 2 will run, but
-        # the test1 and test2 jobs may or may not run.
-        self.assertTrue(len(self.history) > 6)
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
@@ -219,43 +242,67 @@
     def test_independent_queues(self):
         "Test that changes end up in the right queues"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
         # There should be one merge job at the head of each queue running
-        self.assertEqual(len(self.builds), 2)
-        self.assertEqual(self.builds[0].name, 'project-merge')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
-        self.assertEqual(self.builds[1].name, 'project1-merge')
-        self.assertTrue(self.job_has_changes(self.builds[1], B))
+        self.assertBuilds([
+            dict(name='project-merge', changes='1,1'),
+            dict(name='project-merge', changes='2,1'),
+        ])
 
         # Release the current merge builds
-        self.worker.release('.*-merge')
+        self.builds[0].release()
+        self.waitUntilSettled()
+        self.builds[0].release()
         self.waitUntilSettled()
         # Release the merge job for project2 which is behind project1
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # All the test builds should be running:
-        # project1 (3) + project2 (3) + project (2) = 8
-        self.assertEqual(len(self.builds), 8)
+        self.assertBuilds([
+            dict(name='project-test1', changes='1,1'),
+            dict(name='project-test2', changes='1,1'),
+            dict(name='project-test1', changes='2,1'),
+            dict(name='project-test2', changes='2,1'),
+            dict(name='project1-project2-integration', changes='2,1'),
+            dict(name='project-test1', changes='2,1 3,1'),
+            dict(name='project-test2', changes='2,1 3,1'),
+            dict(name='project1-project2-integration', changes='2,1 3,1'),
+        ])
 
-        self.worker.release()
-        self.waitUntilSettled()
-        self.assertEqual(len(self.builds), 0)
+        self.orderedRelease()
+        self.assertHistory([
+            dict(name='project-merge', result='SUCCESS', changes='1,1'),
+            dict(name='project-merge', result='SUCCESS', changes='2,1'),
+            dict(name='project-merge', result='SUCCESS', changes='2,1 3,1'),
+            dict(name='project-test1', result='SUCCESS', changes='1,1'),
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-test1', result='SUCCESS', changes='2,1'),
+            dict(name='project-test2', result='SUCCESS', changes='2,1'),
+            dict(
+                name='project1-project2-integration',
+                result='SUCCESS',
+                changes='2,1'),
+            dict(name='project-test1', result='SUCCESS', changes='2,1 3,1'),
+            dict(name='project-test2', result='SUCCESS', changes='2,1 3,1'),
+            dict(name='project1-project2-integration',
+                 result='SUCCESS',
+                 changes='2,1 3,1'),
+        ])
 
-        self.assertEqual(len(self.history), 11)
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(C.data['status'], 'MERGED')
@@ -266,54 +313,111 @@
     def test_failed_change_at_head(self):
         "Test that if a change at the head fails, jobs behind it are canceled"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
-        self.assertEqual(len(self.builds), 1)
-        self.assertEqual(self.builds[0].name, 'project-merge')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertBuilds([
+            dict(name='project-merge', changes='1,1'),
+        ])
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
-        self.assertEqual(len(self.builds), 6)
-        self.assertEqual(self.builds[0].name, 'project-test1')
-        self.assertEqual(self.builds[1].name, 'project-test2')
-        self.assertEqual(self.builds[2].name, 'project-test1')
-        self.assertEqual(self.builds[3].name, 'project-test2')
-        self.assertEqual(self.builds[4].name, 'project-test1')
-        self.assertEqual(self.builds[5].name, 'project-test2')
+        self.assertBuilds([
+            dict(name='project-test1', changes='1,1'),
+            dict(name='project-test2', changes='1,1'),
+            dict(name='project-test1', changes='1,1 2,1'),
+            dict(name='project-test2', changes='1,1 2,1'),
+            dict(name='project-test1', changes='1,1 2,1 3,1'),
+            dict(name='project-test2', changes='1,1 2,1 3,1'),
+        ])
 
         self.release(self.builds[0])
         self.waitUntilSettled()
 
         # project-test2, project-merge for B
-        self.assertEqual(len(self.builds), 2)
-        self.assertEqual(self.countJobResults(self.history, 'ABORTED'), 4)
+        self.assertBuilds([
+            dict(name='project-test2', changes='1,1'),
+            dict(name='project-merge', changes='2,1'),
+        ])
+        # Unordered history comparison because the aborts can finish
+        # in any order.
+        self.assertHistory([
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1 2,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1 2,1 3,1'),
+            dict(name='project-test1', result='FAILURE',
+                 changes='1,1'),
+            dict(name='project-test1', result='ABORTED',
+                 changes='1,1 2,1'),
+            dict(name='project-test2', result='ABORTED',
+                 changes='1,1 2,1'),
+            dict(name='project-test1', result='ABORTED',
+                 changes='1,1 2,1 3,1'),
+            dict(name='project-test2', result='ABORTED',
+                 changes='1,1 2,1 3,1'),
+        ], ordered=False)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+        self.orderedRelease()
 
-        self.assertEqual(len(self.builds), 0)
-        self.assertEqual(len(self.history), 15)
+        self.assertBuilds([])
+        self.assertHistory([
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1 2,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='1,1 2,1 3,1'),
+            dict(name='project-test1', result='FAILURE',
+                 changes='1,1'),
+            dict(name='project-test1', result='ABORTED',
+                 changes='1,1 2,1'),
+            dict(name='project-test2', result='ABORTED',
+                 changes='1,1 2,1'),
+            dict(name='project-test1', result='ABORTED',
+                 changes='1,1 2,1 3,1'),
+            dict(name='project-test2', result='ABORTED',
+                 changes='1,1 2,1 3,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='2,1'),
+            dict(name='project-merge', result='SUCCESS',
+                 changes='2,1 3,1'),
+            dict(name='project-test2', result='SUCCESS',
+                 changes='1,1'),
+            dict(name='project-test1', result='SUCCESS',
+                 changes='2,1'),
+            dict(name='project-test2', result='SUCCESS',
+                 changes='2,1'),
+            dict(name='project-test1', result='SUCCESS',
+                 changes='2,1 3,1'),
+            dict(name='project-test2', result='SUCCESS',
+                 changes='2,1 3,1'),
+        ], ordered=False)
+
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(C.data['status'], 'MERGED')
@@ -324,27 +428,27 @@
     def test_failed_change_in_middle(self):
         "Test a failed change in the middle of the queue"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project-test1', B)
+        self.launch_server.failJob('project-test1', B)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 6)
@@ -364,7 +468,7 @@
         self.assertEqual(len(self.builds), 4)
         self.assertEqual(self.countJobResults(self.history, 'ABORTED'), 2)
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # project-test1 and project-test2 for A
@@ -372,7 +476,8 @@
         # project-test1 and project-test2 for C
         self.assertEqual(len(self.builds), 5)
 
-        items = self.sched.layout.pipelines['gate'].getAllItems()
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        items = tenant.layout.pipelines['gate'].getAllItems()
         builds = items[0].current_build_set.getBuilds()
         self.assertEqual(self.countJobResults(builds, 'SUCCESS'), 1)
         self.assertEqual(self.countJobResults(builds, None), 2)
@@ -384,8 +489,8 @@
         self.assertEqual(self.countJobResults(builds, 'SUCCESS'), 1)
         self.assertEqual(self.countJobResults(builds, None), 2)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 0)
@@ -404,22 +509,24 @@
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
         queue = self.gearman_server.getQueue()
         self.assertEqual(len(self.builds), 0)
         self.assertEqual(len(queue), 1)
-        self.assertEqual(queue[0].name, 'build:project-merge')
-        self.assertTrue(self.job_has_changes(queue[0], A))
+        self.assertEqual(queue[0].name, 'launcher:launch')
+        job_args = json.loads(queue[0].arguments)
+        self.assertEqual(job_args['job'], 'project-merge')
+        self.assertEqual(job_args['items'][0]['number'], '%d' % A.number)
 
         self.gearman_server.release('.*-merge')
         self.waitUntilSettled()
@@ -431,12 +538,19 @@
 
         self.assertEqual(len(self.builds), 0)
         self.assertEqual(len(queue), 6)
-        self.assertEqual(queue[0].name, 'build:project-test1')
-        self.assertEqual(queue[1].name, 'build:project-test2')
-        self.assertEqual(queue[2].name, 'build:project-test1')
-        self.assertEqual(queue[3].name, 'build:project-test2')
-        self.assertEqual(queue[4].name, 'build:project-test1')
-        self.assertEqual(queue[5].name, 'build:project-test2')
+
+        self.assertEqual(
+            json.loads(queue[0].arguments)['job'], 'project-test1')
+        self.assertEqual(
+            json.loads(queue[1].arguments)['job'], 'project-test2')
+        self.assertEqual(
+            json.loads(queue[2].arguments)['job'], 'project-test1')
+        self.assertEqual(
+            json.loads(queue[3].arguments)['job'], 'project-test2')
+        self.assertEqual(
+            json.loads(queue[4].arguments)['job'], 'project-test1')
+        self.assertEqual(
+            json.loads(queue[5].arguments)['job'], 'project-test2')
 
         self.release(queue[0])
         self.waitUntilSettled()
@@ -459,11 +573,12 @@
         self.assertEqual(B.reported, 2)
         self.assertEqual(C.reported, 2)
 
+    @skip("Disabled for early v3 development")
     def _test_time_database(self, iteration):
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         time.sleep(2)
 
@@ -489,10 +604,11 @@
             self.assertTrue(found_job['estimated_time'] >= 2)
             self.assertIsNotNone(found_job['remaining_time'])
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
+    @skip("Disabled for early v3 development")
     def test_time_database(self):
         "Test the time database"
 
@@ -502,27 +618,27 @@
     def test_two_failed_changes_at_head(self):
         "Test that changes are reparented correctly if 2 fail at head"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project-test1', A)
-        self.worker.addFailTest('project-test1', B)
+        self.launch_server.failJob('project-test1', A)
+        self.launch_server.failJob('project-test1', B)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 6)
@@ -533,19 +649,19 @@
         self.assertEqual(self.builds[4].name, 'project-test1')
         self.assertEqual(self.builds[5].name, 'project-test2')
 
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
-        self.assertTrue(self.job_has_changes(self.builds[2], A))
-        self.assertTrue(self.job_has_changes(self.builds[2], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], A))
-        self.assertTrue(self.job_has_changes(self.builds[4], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], C))
+        self.assertTrue(self.builds[0].hasChanges(A))
+        self.assertTrue(self.builds[2].hasChanges(A))
+        self.assertTrue(self.builds[2].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(A))
+        self.assertTrue(self.builds[4].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(C))
 
         # Fail change B first
         self.release(self.builds[2])
         self.waitUntilSettled()
 
         # restart of C after B failure
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 5)
@@ -555,12 +671,12 @@
         self.assertEqual(self.builds[3].name, 'project-test1')
         self.assertEqual(self.builds[4].name, 'project-test2')
 
-        self.assertTrue(self.job_has_changes(self.builds[1], A))
-        self.assertTrue(self.job_has_changes(self.builds[2], A))
-        self.assertTrue(self.job_has_changes(self.builds[2], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], A))
-        self.assertFalse(self.job_has_changes(self.builds[4], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], C))
+        self.assertTrue(self.builds[1].hasChanges(A))
+        self.assertTrue(self.builds[2].hasChanges(A))
+        self.assertTrue(self.builds[2].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(A))
+        self.assertFalse(self.builds[4].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(C))
 
         # Finish running all passing jobs for change A
         self.release(self.builds[1])
@@ -570,9 +686,9 @@
         self.waitUntilSettled()
 
         # restart of B,C after A failure
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 4)
@@ -581,18 +697,18 @@
         self.assertEqual(self.builds[2].name, 'project-test1')  # C
         self.assertEqual(self.builds[3].name, 'project-test2')  # C
 
-        self.assertFalse(self.job_has_changes(self.builds[1], A))
-        self.assertTrue(self.job_has_changes(self.builds[1], B))
-        self.assertFalse(self.job_has_changes(self.builds[1], C))
+        self.assertFalse(self.builds[1].hasChanges(A))
+        self.assertTrue(self.builds[1].hasChanges(B))
+        self.assertFalse(self.builds[1].hasChanges(C))
 
-        self.assertFalse(self.job_has_changes(self.builds[2], A))
+        self.assertFalse(self.builds[2].hasChanges(A))
         # After A failed and B and C restarted, B should be back in
         # C's tests because it has not failed yet.
-        self.assertTrue(self.job_has_changes(self.builds[2], B))
-        self.assertTrue(self.job_has_changes(self.builds[2], C))
+        self.assertTrue(self.builds[2].hasChanges(B))
+        self.assertTrue(self.builds[2].hasChanges(C))
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 0)
@@ -604,44 +720,14 @@
         self.assertEqual(B.reported, 2)
         self.assertEqual(C.reported, 2)
 
-    def test_parse_skip_if(self):
-        job_yaml = """
-jobs:
-  - name: job_name
-    skip-if:
-      - project: ^project_name$
-        branch: ^stable/icehouse$
-        all-files-match-any:
-          - ^filename$
-      - project: ^project2_name$
-        all-files-match-any:
-          - ^filename2$
-    """.strip()
-        data = yaml.load(job_yaml)
-        config_job = data.get('jobs')[0]
-        cm = zuul.change_matcher
-        expected = cm.MatchAny([
-            cm.MatchAll([
-                cm.ProjectMatcher('^project_name$'),
-                cm.BranchMatcher('^stable/icehouse$'),
-                cm.MatchAllFiles([cm.FileMatcher('^filename$')]),
-            ]),
-            cm.MatchAll([
-                cm.ProjectMatcher('^project2_name$'),
-                cm.MatchAllFiles([cm.FileMatcher('^filename2$')]),
-            ]),
-        ])
-        matcher = self.sched._parseSkipIf(config_job)
-        self.assertEqual(expected, matcher)
-
     def test_patch_order(self):
         "Test that dependent patches are tested in the right order"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
         M2 = self.fake_gerrit.addFakeChange('org/project', 'master', 'M2')
         M1 = self.fake_gerrit.addFakeChange('org/project', 'master', 'M1')
@@ -658,7 +744,7 @@
         A.setDependsOn(M1, 1)
         M1.setDependsOn(M2, 1)
 
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
@@ -666,8 +752,8 @@
         self.assertEqual(B.data['status'], 'NEW')
         self.assertEqual(C.data['status'], 'NEW')
 
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
 
         self.waitUntilSettled()
         self.assertEqual(M2.queried, 0)
@@ -702,14 +788,14 @@
         F.setDependsOn(B, 1)
         G.setDependsOn(A, 1)
 
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
-        E.addApproval('CRVW', 2)
-        F.addApproval('CRVW', 2)
-        G.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
+        E.addApproval('code-review', 2)
+        F.addApproval('code-review', 2)
+        G.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
@@ -727,23 +813,23 @@
         # triggering events.  Since it will have the changes cached
         # already (without approvals), we need to clear the cache
         # first.
-        for connection in self.connections.values():
+        for connection in self.connections.connections.values():
             connection.maintainCache([])
 
-        self.worker.hold_jobs_in_build = True
-        A.addApproval('APRV', 1)
-        B.addApproval('APRV', 1)
-        D.addApproval('APRV', 1)
-        E.addApproval('APRV', 1)
-        F.addApproval('APRV', 1)
-        G.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.launch_server.hold_jobs_in_build = True
+        A.addApproval('approved', 1)
+        B.addApproval('approved', 1)
+        D.addApproval('approved', 1)
+        E.addApproval('approved', 1)
+        F.addApproval('approved', 1)
+        G.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         for x in range(8):
-            self.worker.release('.*-merge')
+            self.launch_server.release('.*-merge')
             self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -765,13 +851,13 @@
 
     def test_source_cache(self):
         "Test that the source cache operates correctly"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         X = self.fake_gerrit.addFakeChange('org/project', 'master', 'X')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         M1 = self.fake_gerrit.addFakeChange('org/project', 'master', 'M1')
         M1.setMerged()
@@ -779,7 +865,7 @@
         B.setDependsOn(A, 1)
         A.setDependsOn(M1, 1)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.fake_gerrit.addEvent(X.getPatchsetCreatedEvent(1))
 
         self.waitUntilSettled()
@@ -793,15 +879,15 @@
                 build.release()
         self.waitUntilSettled()
 
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.log.debug("len %s" % self.fake_gerrit._change_cache.keys())
         # there should still be changes in the cache
         self.assertNotEqual(len(self.fake_gerrit._change_cache.keys()), 0)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -813,53 +899,29 @@
         "Test whether a change is ready to merge"
         # TODO: move to test_gerrit (this is a unit test!)
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        source = self.sched.layout.pipelines['gate'].source
-        a = source._getChange(1, 2)
-        mgr = self.sched.layout.pipelines['gate'].manager
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        source = tenant.layout.pipelines['gate'].source
+
+        # TODO(pabelanger): As we add more source / trigger APIs we should make
+        # it easier for users to create events for testing.
+        event = zuul.model.TriggerEvent()
+        event.trigger_name = 'gerrit'
+        event.change_number = '1'
+        event.patch_number = '2'
+
+        a = source.getChange(event)
+        mgr = tenant.layout.pipelines['gate'].manager
         self.assertFalse(source.canMerge(a, mgr.getSubmitAllowNeeds()))
 
-        A.addApproval('CRVW', 2)
-        a = source._getChange(1, 2, refresh=True)
+        A.addApproval('code-review', 2)
+        a = source.getChange(event, refresh=True)
         self.assertFalse(source.canMerge(a, mgr.getSubmitAllowNeeds()))
 
-        A.addApproval('APRV', 1)
-        a = source._getChange(1, 2, refresh=True)
+        A.addApproval('approved', 1)
+        a = source.getChange(event, refresh=True)
         self.assertTrue(source.canMerge(a, mgr.getSubmitAllowNeeds()))
 
-    def test_build_configuration(self):
-        "Test that zuul merges the right commits for testing"
-
-        self.gearman_server.hold_jobs_in_queue = True
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.waitUntilSettled()
-
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        queue = self.gearman_server.getQueue()
-        ref = self.getParameter(queue[-1], 'ZUUL_REF')
-        self.gearman_server.hold_jobs_in_queue = False
-        self.gearman_server.release()
-        self.waitUntilSettled()
-
-        path = os.path.join(self.git_root, "org/project")
-        repo = git.Repo(path)
-        repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
-        repo_messages.reverse()
-        correct_messages = ['initial commit', 'A-1', 'B-1', 'C-1']
-        self.assertEqual(repo_messages, correct_messages)
-
+    @skip("Disabled for early v3 development")
     def test_build_configuration_conflict(self):
         "Test that merge conflicts are handled"
 
@@ -872,12 +934,12 @@
         B.addPatchset(['conflict'])
         C = self.fake_gerrit.addFakeChange('org/conflict-project',
                                            'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.reported, 1)
@@ -952,8 +1014,7 @@
     def test_post_ignore_deletes_negative(self):
         "Test that deleting refs does trigger post jobs"
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-dont-ignore-deletes.yaml')
+        self.updateConfigLayout('layout-dont-ignore-ref-deletes')
         self.sched.reconfigure(self.config)
 
         e = {
@@ -975,40 +1036,7 @@
         self.assertEqual(len(self.history), 1)
         self.assertIn('project-post', job_names)
 
-    def test_build_configuration_branch(self):
-        "Test that the right commits are on alternate branches"
-
-        self.gearman_server.hold_jobs_in_queue = True
-        A = self.fake_gerrit.addFakeChange('org/project', 'mp', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project', 'mp', 'B')
-        C = self.fake_gerrit.addFakeChange('org/project', 'mp', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.waitUntilSettled()
-
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        queue = self.gearman_server.getQueue()
-        ref = self.getParameter(queue[-1], 'ZUUL_REF')
-        self.gearman_server.hold_jobs_in_queue = False
-        self.gearman_server.release()
-        self.waitUntilSettled()
-
-        path = os.path.join(self.git_root, "org/project")
-        repo = git.Repo(path)
-        repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
-        repo_messages.reverse()
-        correct_messages = ['initial commit', 'mp commit', 'A-1', 'B-1', 'C-1']
-        self.assertEqual(repo_messages, correct_messages)
-
+    @skip("Disabled for early v3 development")
     def test_build_configuration_branch_interaction(self):
         "Test that switching between branches works"
         self.test_build_configuration()
@@ -1019,148 +1047,15 @@
         repo.heads.master.commit = repo.commit('init')
         self.test_build_configuration()
 
-    def test_build_configuration_multi_branch(self):
-        "Test that dependent changes on multiple branches are merged"
-
-        self.gearman_server.hold_jobs_in_queue = True
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project', 'mp', 'B')
-        C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.waitUntilSettled()
-        queue = self.gearman_server.getQueue()
-        job_A = None
-        for job in queue:
-            if 'project-merge' in job.name:
-                job_A = job
-        ref_A = self.getParameter(job_A, 'ZUUL_REF')
-        commit_A = self.getParameter(job_A, 'ZUUL_COMMIT')
-        self.log.debug("Got Zuul ref for change A: %s" % ref_A)
-        self.log.debug("Got Zuul commit for change A: %s" % commit_A)
-
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        queue = self.gearman_server.getQueue()
-        job_B = None
-        for job in queue:
-            if 'project-merge' in job.name:
-                job_B = job
-        ref_B = self.getParameter(job_B, 'ZUUL_REF')
-        commit_B = self.getParameter(job_B, 'ZUUL_COMMIT')
-        self.log.debug("Got Zuul ref for change B: %s" % ref_B)
-        self.log.debug("Got Zuul commit for change B: %s" % commit_B)
-
-        self.gearman_server.release('.*-merge')
-        self.waitUntilSettled()
-        queue = self.gearman_server.getQueue()
-        for job in queue:
-            if 'project-merge' in job.name:
-                job_C = job
-        ref_C = self.getParameter(job_C, 'ZUUL_REF')
-        commit_C = self.getParameter(job_C, 'ZUUL_COMMIT')
-        self.log.debug("Got Zuul ref for change C: %s" % ref_C)
-        self.log.debug("Got Zuul commit for change C: %s" % commit_C)
-        self.gearman_server.hold_jobs_in_queue = False
-        self.gearman_server.release()
-        self.waitUntilSettled()
-
-        path = os.path.join(self.git_root, "org/project")
-        repo = git.Repo(path)
-
-        repo_messages = [c.message.strip()
-                         for c in repo.iter_commits(ref_C)]
-        repo_shas = [c.hexsha for c in repo.iter_commits(ref_C)]
-        repo_messages.reverse()
-        correct_messages = ['initial commit', 'A-1', 'C-1']
-        # Ensure the right commits are in the history for this ref
-        self.assertEqual(repo_messages, correct_messages)
-        # Ensure ZUUL_REF -> ZUUL_COMMIT
-        self.assertEqual(repo_shas[0], commit_C)
-
-        repo_messages = [c.message.strip()
-                         for c in repo.iter_commits(ref_B)]
-        repo_shas = [c.hexsha for c in repo.iter_commits(ref_B)]
-        repo_messages.reverse()
-        correct_messages = ['initial commit', 'mp commit', 'B-1']
-        self.assertEqual(repo_messages, correct_messages)
-        self.assertEqual(repo_shas[0], commit_B)
-
-        repo_messages = [c.message.strip()
-                         for c in repo.iter_commits(ref_A)]
-        repo_shas = [c.hexsha for c in repo.iter_commits(ref_A)]
-        repo_messages.reverse()
-        correct_messages = ['initial commit', 'A-1']
-        self.assertEqual(repo_messages, correct_messages)
-        self.assertEqual(repo_shas[0], commit_A)
-
-        self.assertNotEqual(ref_A, ref_B, ref_C)
-        self.assertNotEqual(commit_A, commit_B, commit_C)
-
-    def test_one_job_project(self):
-        "Test that queueing works with one job"
-        A = self.fake_gerrit.addFakeChange('org/one-job-project',
-                                           'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/one-job-project',
-                                           'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.waitUntilSettled()
-
-        self.assertEqual(A.data['status'], 'MERGED')
-        self.assertEqual(A.reported, 2)
-        self.assertEqual(B.data['status'], 'MERGED')
-        self.assertEqual(B.reported, 2)
-
-    def test_job_from_templates_launched(self):
-        "Test whether a job generated via a template can be launched"
-
-        A = self.fake_gerrit.addFakeChange(
-            'org/templated-project', 'master', 'A')
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-
-        self.assertEqual(self.getJobFromHistory('project-test1').result,
-                         'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('project-test2').result,
-                         'SUCCESS')
-
-    def test_layered_templates(self):
-        "Test whether a job generated via a template can be launched"
-
-        A = self.fake_gerrit.addFakeChange(
-            'org/layered-project', 'master', 'A')
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-
-        self.assertEqual(self.getJobFromHistory('project-test1').result,
-                         'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('project-test2').result,
-                         'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('layered-project-test3'
-                                                ).result, 'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('layered-project-test4'
-                                                ).result, 'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('layered-project-foo-test5'
-                                                ).result, 'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('project-test6').result,
-                         'SUCCESS')
-
     def test_dependent_changes_dequeue(self):
         "Test that dependent patches are not needlessly tested"
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
         M1 = self.fake_gerrit.addFakeChange('org/project', 'master', 'M1')
         M1.setMerged()
@@ -1171,11 +1066,11 @@
         B.setDependsOn(A, 1)
         A.setDependsOn(M1, 1)
 
-        self.worker.addFailTest('project-merge', A)
+        self.launch_server.failJob('project-merge', A)
 
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
@@ -1189,50 +1084,50 @@
 
     def test_failing_dependent_changes(self):
         "Test that failing dependent patches are taken out of stream"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
         D = self.fake_gerrit.addFakeChange('org/project', 'master', 'D')
         E = self.fake_gerrit.addFakeChange('org/project', 'master', 'E')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
-        E.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
+        E.addApproval('code-review', 2)
 
         # E, D -> C -> B, A
 
         D.setDependsOn(C, 1)
         C.setDependsOn(B, 1)
 
-        self.worker.addFailTest('project-test1', B)
+        self.launch_server.failJob('project-test1', B)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(E.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(E.addApproval('approved', 1))
 
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
+        self.launch_server.hold_jobs_in_build = False
         for build in self.builds:
             if build.parameters['ZUUL_CHANGE'] != '1':
                 build.release()
                 self.waitUntilSettled()
 
-        self.worker.release()
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -1257,44 +1152,44 @@
         # If it's dequeued more than once, we should see extra
         # aborted jobs.
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project1', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.worker.addFailTest('project1-test1', A)
-        self.worker.addFailTest('project1-test2', A)
-        self.worker.addFailTest('project1-project2-integration', A)
+        self.launch_server.failJob('project-test1', A)
+        self.launch_server.failJob('project-test2', A)
+        self.launch_server.failJob('project1-project2-integration', A)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 1)
-        self.assertEqual(self.builds[0].name, 'project1-merge')
-        self.assertTrue(self.job_has_changes(self.builds[0], A))
+        self.assertEqual(self.builds[0].name, 'project-merge')
+        self.assertTrue(self.builds[0].hasChanges(A))
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 9)
-        self.assertEqual(self.builds[0].name, 'project1-test1')
-        self.assertEqual(self.builds[1].name, 'project1-test2')
+        self.assertEqual(self.builds[0].name, 'project-test1')
+        self.assertEqual(self.builds[1].name, 'project-test2')
         self.assertEqual(self.builds[2].name, 'project1-project2-integration')
-        self.assertEqual(self.builds[3].name, 'project1-test1')
-        self.assertEqual(self.builds[4].name, 'project1-test2')
+        self.assertEqual(self.builds[3].name, 'project-test1')
+        self.assertEqual(self.builds[4].name, 'project-test2')
         self.assertEqual(self.builds[5].name, 'project1-project2-integration')
-        self.assertEqual(self.builds[6].name, 'project1-test1')
-        self.assertEqual(self.builds[7].name, 'project1-test2')
+        self.assertEqual(self.builds[6].name, 'project-test1')
+        self.assertEqual(self.builds[7].name, 'project-test2')
         self.assertEqual(self.builds[8].name, 'project1-project2-integration')
 
         self.release(self.builds[0])
@@ -1303,8 +1198,8 @@
         self.assertEqual(len(self.builds), 3)  # test2,integration, merge for B
         self.assertEqual(self.countJobResults(self.history, 'ABORTED'), 6)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 0)
@@ -1322,9 +1217,9 @@
 
         A = self.fake_gerrit.addFakeChange('org/nonvoting-project',
                                            'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.worker.addFailTest('nonvoting-project-test2', A)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.launch_server.failJob('nonvoting-project-test2', A)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
@@ -1364,7 +1259,7 @@
         "Test failed check queue jobs."
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        self.worker.addFailTest('project-test2', A)
+        self.launch_server.failJob('project-test2', A)
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
 
         self.waitUntilSettled()
@@ -1379,11 +1274,14 @@
                          'FAILURE')
 
     def test_dependent_behind_dequeue(self):
+        # This particular test does a large amount of merges and needs a little
+        # more time to complete
+        self.wait_timeout = 90
         "test that dependent changes behind dequeued changes work"
         # This complicated test is a reproduction of a real life bug
         self.sched.reconfigure(self.config)
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C')
@@ -1392,44 +1290,44 @@
         F = self.fake_gerrit.addFakeChange('org/project3', 'master', 'F')
         D.setDependsOn(C, 1)
         E.setDependsOn(D, 1)
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
-        E.addApproval('CRVW', 2)
-        F.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
+        E.addApproval('code-review', 2)
+        F.addApproval('code-review', 2)
 
         A.fail_merge = True
 
         # Change object re-use in the gerrit trigger is hidden if
         # changes are added in quick succession; waiting makes it more
         # like real life.
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.fake_gerrit.addEvent(E.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(E.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.fake_gerrit.addEvent(F.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(F.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # all jobs running
@@ -1443,8 +1341,8 @@
         c.release()
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -1468,8 +1366,8 @@
         "Test that the merger works after a repack"
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -1480,14 +1378,18 @@
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
         self.assertEmptyQueues()
-        self.worker.build_history = []
+        self.build_history = []
 
-        path = os.path.join(self.git_root, "org/project")
-        print(repack_repo(path))
+        path = os.path.join(self.merger_src_root, "org/project")
+        if os.path.exists(path):
+            repack_repo(path)
+        path = os.path.join(self.launcher_src_root, "org/project")
+        if os.path.exists(path):
+            repack_repo(path)
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -1502,51 +1404,23 @@
         "Test that the merger works with large changes after a repack"
         # https://bugs.launchpad.net/zuul/+bug/1078946
         # This test assumes the repo is already cloned; make sure it is
+        tenant = self.sched.abide.tenants.get('tenant-one')
         url = self.fake_gerrit.getGitUrl(
-            self.sched.layout.projects['org/project1'])
+            tenant.layout.project_configs.get('org/project1'))
         self.merge_server.merger.addProject('org/project1', url)
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         A.addPatchset(large=True)
         path = os.path.join(self.upstream_root, "org/project1")
-        print(repack_repo(path))
-        path = os.path.join(self.git_root, "org/project1")
-        print(repack_repo(path))
+        repack_repo(path)
+        path = os.path.join(self.merger_src_root, "org/project1")
+        if os.path.exists(path):
+            repack_repo(path)
+        path = os.path.join(self.launcher_src_root, "org/project1")
+        if os.path.exists(path):
+            repack_repo(path)
 
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.waitUntilSettled()
-        self.assertEqual(self.getJobFromHistory('project1-merge').result,
-                         'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('project1-test1').result,
-                         'SUCCESS')
-        self.assertEqual(self.getJobFromHistory('project1-test2').result,
-                         'SUCCESS')
-        self.assertEqual(A.data['status'], 'MERGED')
-        self.assertEqual(A.reported, 2)
-
-    def test_nonexistent_job(self):
-        "Test launching a job that doesn't exist"
-        # Set to the state immediately after a restart
-        self.resetGearmanServer()
-        self.launcher.negative_function_cache_ttl = 0
-
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        # There may be a thread about to report a lost change
-        while A.reported < 2:
-            self.waitUntilSettled()
-        job_names = [x.name for x in self.history]
-        self.assertFalse(job_names)
-        self.assertEqual(A.data['status'], 'NEW')
-        self.assertEqual(A.reported, 2)
-        self.assertEmptyQueues()
-
-        # Make sure things still work:
-        self.registerJobs()
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -1557,33 +1431,10 @@
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
 
-    def test_single_nonexistent_post_job(self):
-        "Test launching a single post job that doesn't exist"
-        e = {
-            "type": "ref-updated",
-            "submitter": {
-                "name": "User Name",
-            },
-            "refUpdate": {
-                "oldRev": "90f173846e3af9154517b88543ffbd1691f31366",
-                "newRev": "d479a0bfcb34da57a31adb2a595c0cf687812543",
-                "refName": "master",
-                "project": "org/project",
-            }
-        }
-        # Set to the state immediately after a restart
-        self.resetGearmanServer()
-        self.launcher.negative_function_cache_ttl = 0
-
-        self.fake_gerrit.addEvent(e)
-        self.waitUntilSettled()
-
-        self.assertEqual(len(self.history), 0)
-
     def test_new_patchset_dequeues_old(self):
         "Test that a new patchset causes the old to be dequeued"
         # D -> C (depends on B) -> B (depends on A) -> A -> M
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         M = self.fake_gerrit.addFakeChange('org/project', 'master', 'M')
         M.setMerged()
 
@@ -1591,27 +1442,27 @@
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
         D = self.fake_gerrit.addFakeChange('org/project', 'master', 'D')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
 
         C.setDependsOn(B, 1)
         B.setDependsOn(A, 1)
         A.setDependsOn(M, 1)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
         self.waitUntilSettled()
 
         B.addPatchset()
         self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2))
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -1627,11 +1478,12 @@
     def test_new_patchset_check(self):
         "Test a new patchset in check"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        check_pipeline = self.sched.layout.pipelines['check']
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        check_pipeline = tenant.layout.pipelines['check']
 
         # Add two git-dependent changes
         B.setDependsOn(A, 1)
@@ -1704,8 +1556,8 @@
         self.waitUntilSettled()
         self.builds[0].release()
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.reported, 1)
@@ -1722,11 +1574,11 @@
     def test_abandoned_gate(self):
         "Test that an abandoned change is dequeued from gate"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 1, "One job being built (on hold)")
         self.assertEqual(self.builds[0].name, 'project-merge')
@@ -1734,24 +1586,25 @@
         self.fake_gerrit.addEvent(A.getChangeAbandonedEvent())
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
-        self.assertEqual(len(self.builds), 0, "No job running")
-        self.assertEqual(len(self.history), 1, "Only one build in history")
-        self.assertEqual(self.history[0].result, 'ABORTED',
-                         "Build should have been aborted")
+        self.assertBuilds([])
+        self.assertHistory([
+            dict(name='project-merge', result='ABORTED', changes='1,1')],
+            ordered=False)
         self.assertEqual(A.reported, 1,
                          "Abandoned gate change should report only start")
 
     def test_abandoned_check(self):
         "Test that an abandoned change is dequeued from check"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        check_pipeline = self.sched.layout.pipelines['check']
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        check_pipeline = tenant.layout.pipelines['check']
 
         # Add two git-dependent changes
         B.setDependsOn(A, 1)
@@ -1787,8 +1640,8 @@
         self.assertEqual(items[1].change.number, '2')
         self.assertTrue(items[1].live)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 4)
@@ -1800,23 +1653,21 @@
     def test_abandoned_not_timer(self):
         "Test that an abandoned change does not cancel timer jobs"
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         # Start timer trigger - also org/project
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-idle.yaml')
+        self.updateConfigLayout('layout-idle')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
         # The pipeline triggers every second, so we should have seen
         # several by now.
         time.sleep(5)
         self.waitUntilSettled()
         # Stop queuing timer triggered jobs so that the assertions
         # below don't race against more jobs being queued.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-no-timer.yaml')
+        # Must be in same repo, so overwrite config with another one
+        self.commitLayoutUpdate('layout-idle', 'layout-no-timer')
+
         self.sched.reconfigure(self.config)
-        self.registerJobs()
         self.assertEqual(len(self.builds), 2, "Two timer jobs")
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -1829,58 +1680,58 @@
 
         self.assertEqual(len(self.builds), 2, "Two timer jobs remain")
 
-        self.worker.release()
+        self.launch_server.release()
         self.waitUntilSettled()
 
     def test_zuul_url_return(self):
         "Test if ZUUL_URL is returning when zuul_url is set in zuul.conf"
         self.assertTrue(self.sched.config.has_option('merger', 'zuul_url'))
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 1)
         for build in self.builds:
             self.assertTrue('ZUUL_URL' in build.parameters)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
     def test_new_patchset_dequeues_old_on_head(self):
         "Test that a new patchset causes the old to be dequeued (at head)"
         # D -> C (depends on B) -> B (depends on A) -> A -> M
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         M = self.fake_gerrit.addFakeChange('org/project', 'master', 'M')
         M.setMerged()
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
         D = self.fake_gerrit.addFakeChange('org/project', 'master', 'D')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
 
         C.setDependsOn(B, 1)
         B.setDependsOn(A, 1)
         A.setDependsOn(M, 1)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
         self.waitUntilSettled()
 
         A.addPatchset()
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -1895,25 +1746,25 @@
 
     def test_new_patchset_dequeues_old_without_dependents(self):
         "Test that a new patchset causes only the old to be dequeued"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         B.addPatchset()
         self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2))
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -1926,7 +1777,7 @@
 
     def test_new_patchset_dequeues_old_independent_queue(self):
         "Test that a new patchset causes the old to be dequeued (independent)"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
@@ -1939,8 +1790,8 @@
         self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2))
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -1955,8 +1806,8 @@
     def test_noop_job(self):
         "Test that the internal noop job works"
         A = self.fake_gerrit.addFakeChange('org/noop-project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.gearman_server.getQueue()), 0)
@@ -1976,7 +1827,8 @@
         self.assertEqual(A.reported, False)
 
         # Check queue is empty afterwards
-        check_pipeline = self.sched.layout.pipelines['check']
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        check_pipeline = tenant.layout.pipelines['check']
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 0)
 
@@ -1984,7 +1836,7 @@
 
     def test_zuul_refs(self):
         "Test that zuul refs exist and have the right changes"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         M1 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'M1')
         M1.setMerged()
         M2 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'M2')
@@ -1994,35 +1846,42 @@
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C')
         D = self.fake_gerrit.addFakeChange('org/project2', 'master', 'D')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        D.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        D.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
 
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         a_zref = b_zref = c_zref = d_zref = None
+        a_build = b_build = c_build = d_build = None
         for x in self.builds:
             if x.parameters['ZUUL_CHANGE'] == '3':
                 a_zref = x.parameters['ZUUL_REF']
-            if x.parameters['ZUUL_CHANGE'] == '4':
+                a_build = x
+            elif x.parameters['ZUUL_CHANGE'] == '4':
                 b_zref = x.parameters['ZUUL_REF']
-            if x.parameters['ZUUL_CHANGE'] == '5':
+                b_build = x
+            elif x.parameters['ZUUL_CHANGE'] == '5':
                 c_zref = x.parameters['ZUUL_REF']
-            if x.parameters['ZUUL_CHANGE'] == '6':
+                c_build = x
+            elif x.parameters['ZUUL_CHANGE'] == '6':
                 d_zref = x.parameters['ZUUL_REF']
+                d_build = x
+            if a_build and b_build and c_build and d_build:
+                break
 
         # There are... four... refs.
         self.assertIsNotNone(a_zref)
@@ -2034,30 +1893,23 @@
         refs = set([a_zref, b_zref, c_zref, d_zref])
         self.assertEqual(len(refs), 4)
 
-        # a ref should have a, not b, and should not be in project2
-        self.assertTrue(self.ref_has_change(a_zref, A))
-        self.assertFalse(self.ref_has_change(a_zref, B))
-        self.assertFalse(self.ref_has_change(a_zref, M2))
+        # should have a, not b, and should not be in project2
+        self.assertTrue(a_build.hasChanges(A))
+        self.assertFalse(a_build.hasChanges(B, M2))
 
-        # b ref should have a and b, and should not be in project2
-        self.assertTrue(self.ref_has_change(b_zref, A))
-        self.assertTrue(self.ref_has_change(b_zref, B))
-        self.assertFalse(self.ref_has_change(b_zref, M2))
+        # should have a and b, and should not be in project2
+        self.assertTrue(b_build.hasChanges(A, B))
+        self.assertFalse(b_build.hasChanges(M2))
 
-        # c ref should have a and b in 1, c in 2
-        self.assertTrue(self.ref_has_change(c_zref, A))
-        self.assertTrue(self.ref_has_change(c_zref, B))
-        self.assertTrue(self.ref_has_change(c_zref, C))
-        self.assertFalse(self.ref_has_change(c_zref, D))
+        # should have a and b in 1, c in 2
+        self.assertTrue(c_build.hasChanges(A, B, C))
+        self.assertFalse(c_build.hasChanges(D))
 
-        # d ref should have a and b in 1, c and d in 2
-        self.assertTrue(self.ref_has_change(d_zref, A))
-        self.assertTrue(self.ref_has_change(d_zref, B))
-        self.assertTrue(self.ref_has_change(d_zref, C))
-        self.assertTrue(self.ref_has_change(d_zref, D))
+        # should have a and b in 1, c and d in 2
+        self.assertTrue(d_build.hasChanges(A, B, C, D))
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -2071,17 +1923,17 @@
 
     def test_rerun_on_error(self):
         "Test that if a worker fails to run a job, it is run again"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.builds[0].run_error = True
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.builds[0].requeue = True
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
-        self.assertEqual(self.countJobResults(self.history, 'RUN_ERROR'), 1)
+        self.assertEqual(self.countJobResults(self.history, None), 1)
         self.assertEqual(self.countJobResults(self.history, 'SUCCESS'), 3)
 
     def test_statsd(self):
@@ -2095,6 +1947,7 @@
         self.assertReportedStat('test-timing', '3|ms')
         self.assertReportedStat('test-gauge', '12|g')
 
+    @skip("Disabled for early v3 development")
     def test_stuck_job_cleanup(self):
         "Test that pending jobs are cleaned up if removed from layout"
         # This job won't be registered at startup because it is not in
@@ -2104,13 +1957,13 @@
         self.worker.registerFunction('build:gate-noop')
         self.gearman_server.hold_jobs_in_queue = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(len(self.gearman_server.getQueue()), 1)
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-no-jobs.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-no-jobs.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
@@ -2135,7 +1988,7 @@
         #   Use '--' to separate filenames from revisions'
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addPatchset(['HEAD'])
+        A.addPatchset({'HEAD': ''})
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
 
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
@@ -2150,12 +2003,12 @@
     def test_file_jobs(self):
         "Test that file jobs run only when appropriate"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addPatchset(['pip-requires'])
+        A.addPatchset({'pip-requires': 'foo'})
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         testfile_jobs = [x for x in self.history
@@ -2168,80 +2021,81 @@
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(B.reported, 2)
 
-    def _test_skip_if_jobs(self, branch, should_skip):
-        "Test that jobs with a skip-if filter run only when appropriate"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-skip-if.yaml')
+    def _test_irrelevant_files_jobs(self, should_skip):
+        "Test that jobs with irrelevant-files filter run only when appropriate"
+        self.updateConfigLayout('layout-irrelevant-files')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
+
+        if should_skip:
+            files = {'ignoreme': 'ignored\n'}
+        else:
+            files = {'respectme': 'please!\n'}
 
         change = self.fake_gerrit.addFakeChange('org/project',
-                                                branch,
-                                                'test skip-if')
+                                                'master',
+                                                'test irrelevant-files',
+                                                files=files)
         self.fake_gerrit.addEvent(change.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
 
         tested_change_ids = [x.changes[0] for x in self.history
-                             if x.name == 'project-test-skip-if']
+                             if x.name == 'project-test-irrelevant-files']
 
         if should_skip:
             self.assertEqual([], tested_change_ids)
         else:
             self.assertIn(change.data['number'], tested_change_ids)
 
-    def test_skip_if_match_skips_job(self):
-        self._test_skip_if_jobs(branch='master', should_skip=True)
+    def test_irrelevant_files_match_skips_job(self):
+        self._test_irrelevant_files_jobs(should_skip=True)
 
-    def test_skip_if_no_match_runs_job(self):
-        self._test_skip_if_jobs(branch='mp', should_skip=False)
+    def test_irrelevant_files_no_match_runs_job(self):
+        self._test_irrelevant_files_jobs(should_skip=False)
 
+    def test_inherited_jobs_keep_matchers(self):
+        self.updateConfigLayout('layout-inheritance')
+        self.sched.reconfigure(self.config)
+
+        files = {'ignoreme': 'ignored\n'}
+
+        change = self.fake_gerrit.addFakeChange('org/project',
+                                                'master',
+                                                'test irrelevant-files',
+                                                files=files)
+        self.fake_gerrit.addEvent(change.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        run_jobs = set([build.name for build in self.history])
+
+        self.assertEqual(set(['project-test-nomatch-starts-empty',
+                              'project-test-nomatch-starts-full']), run_jobs)
+
+    @skip("Disabled for early v3 development")
     def test_test_config(self):
         "Test that we can test the config"
-        self.sched.testConfig(self.config.get('zuul', 'layout_config'),
+        self.sched.testConfig(self.config.get('zuul', 'tenant_config'),
                               self.connections)
 
-    def test_build_description(self):
-        "Test that build descriptions update"
-        self.worker.registerFunction('set_description:' +
-                                     self.worker.worker_id)
-
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.waitUntilSettled()
-        desc = self.history[0].description
-        self.log.debug("Description: %s" % desc)
-        self.assertTrue(re.search("Branch.*master", desc))
-        self.assertTrue(re.search("Pipeline.*gate", desc))
-        self.assertTrue(re.search("project-merge.*SUCCESS", desc))
-        self.assertTrue(re.search("project-test1.*SUCCESS", desc))
-        self.assertTrue(re.search("project-test2.*SUCCESS", desc))
-        self.assertTrue(re.search("Reported result.*SUCCESS", desc))
-
     def test_queue_names(self):
         "Test shared change queue names"
-        project1 = self.sched.layout.projects['org/project1']
-        project2 = self.sched.layout.projects['org/project2']
-        q1 = self.sched.layout.pipelines['gate'].getQueue(project1)
-        q2 = self.sched.layout.pipelines['gate'].getQueue(project2)
-        self.assertEqual(q1.name, 'integration')
-        self.assertEqual(q2.name, 'integration')
-
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-bad-queue.yaml')
-        with testtools.ExpectedException(
-            Exception, "More than one name assigned to change queue"):
-            self.sched.reconfigure(self.config)
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        source = tenant.layout.pipelines['gate'].source
+        project1 = source.getProject('org/project1')
+        project2 = source.getProject('org/project2')
+        q1 = tenant.layout.pipelines['gate'].getQueue(project1)
+        q2 = tenant.layout.pipelines['gate'].getQueue(project2)
+        self.assertEqual(q1.name, 'integrated')
+        self.assertEqual(q2.name, 'integrated')
 
     def test_queue_precedence(self):
         "Test that queue precedence works"
 
         self.gearman_server.hold_jobs_in_queue = True
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
 
         self.waitUntilSettled()
         self.gearman_server.hold_jobs_in_queue = False
@@ -2250,7 +2104,7 @@
 
         # Run one build at a time to ensure non-race order:
         self.orderedRelease()
-        self.worker.hold_jobs_in_build = False
+        self.launch_server.hold_jobs_in_build = False
         self.waitUntilSettled()
 
         self.log.debug(self.history)
@@ -2263,18 +2117,19 @@
 
     def test_json_status(self):
         "Test that we can retrieve JSON status info"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('project-merge')
+        self.launch_server.release('project-merge')
         self.waitUntilSettled()
 
         port = self.webapp.server.socket.getsockname()[1]
 
-        req = urllib.request.Request("http://localhost:%s/status.json" % port)
+        req = urllib.request.Request(
+            "http://localhost:%s/tenant-one/status" % port)
         f = urllib.request.urlopen(req)
         headers = f.info()
         self.assertIn('Content-Length', headers)
@@ -2287,8 +2142,8 @@
         self.assertIn('Expires', headers)
         data = f.read()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         data = json.loads(data)
@@ -2308,35 +2163,34 @@
         self.assertEqual('project-merge', status_jobs[0]['name'])
         self.assertEqual('https://server/job/project-merge/0/',
                          status_jobs[0]['url'])
-        self.assertEqual('http://logs.example.com/1/1/gate/project-merge/0',
+        self.assertEqual('https://server/job/project-merge/0/',
                          status_jobs[0]['report_url'])
-
         self.assertEqual('project-test1', status_jobs[1]['name'])
-        self.assertEqual('https://server/job/project-test1/1/',
+        self.assertEqual('https://server/job/project-test1/0/',
                          status_jobs[1]['url'])
-        self.assertEqual('http://logs.example.com/1/1/gate/project-test1/1',
+        self.assertEqual('https://server/job/project-test1/0/',
                          status_jobs[1]['report_url'])
 
         self.assertEqual('project-test2', status_jobs[2]['name'])
-        self.assertEqual('https://server/job/project-test2/2/',
+        self.assertEqual('https://server/job/project-test2/0/',
                          status_jobs[2]['url'])
-        self.assertEqual('http://logs.example.com/1/1/gate/project-test2/2',
+        self.assertEqual('https://server/job/project-test2/0/',
                          status_jobs[2]['report_url'])
 
+    @skip("Disabled for early v3 development")
     def test_merging_queues(self):
         "Test that transitively-connected change queues are merged"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-merge-queues.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-merge-queues.yaml')
         self.sched.reconfigure(self.config)
         self.assertEqual(len(self.sched.layout.pipelines['gate'].queues), 1)
 
     def test_mutex(self):
         "Test job mutexes"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-mutex.yaml')
+        self.updateConfigLayout('layout-mutex')
         self.sched.reconfigure(self.config)
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
@@ -2349,7 +2203,7 @@
         self.assertEqual(self.builds[1].name, 'mutex-one')
         self.assertEqual(self.builds[2].name, 'project-test1')
 
-        self.worker.release('mutex-one')
+        self.launch_server.release('mutex-one')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 3)
@@ -2358,7 +2212,7 @@
         self.assertEqual(self.builds[2].name, 'mutex-two')
         self.assertTrue('test-mutex' in self.sched.mutex.mutexes)
 
-        self.worker.release('mutex-two')
+        self.launch_server.release('mutex-two')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 3)
@@ -2367,7 +2221,7 @@
         self.assertEqual(self.builds[2].name, 'mutex-one')
         self.assertTrue('test-mutex' in self.sched.mutex.mutexes)
 
-        self.worker.release('mutex-one')
+        self.launch_server.release('mutex-one')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 3)
@@ -2376,7 +2230,7 @@
         self.assertEqual(self.builds[2].name, 'mutex-two')
         self.assertTrue('test-mutex' in self.sched.mutex.mutexes)
 
-        self.worker.release('mutex-two')
+        self.launch_server.release('mutex-two')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 2)
@@ -2384,8 +2238,8 @@
         self.assertEqual(self.builds[1].name, 'project-test1')
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
 
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 0)
@@ -2396,11 +2250,13 @@
 
     def test_mutex_abandon(self):
         "Test abandon with job mutexes"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-mutex.yaml')
+        self.updateConfigLayout('layout-mutex')
         self.sched.reconfigure(self.config)
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
+
+        tenant = self.sched.abide.tenants.get('openstack')
+        check_pipeline = tenant.layout.pipelines['check']
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
@@ -2414,19 +2270,22 @@
         self.waitUntilSettled()
 
         # The check pipeline should be empty
-        items = self.sched.layout.pipelines['check'].getAllItems()
+        items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 0)
 
         # The mutex should be released
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
 
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
+
     def test_mutex_reconfigure(self):
         "Test reconfigure with job mutexes"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-mutex.yaml')
+        self.updateConfigLayout('layout-mutex')
         self.sched.reconfigure(self.config)
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
@@ -2436,47 +2295,32 @@
 
         self.assertTrue('test-mutex' in self.sched.mutex.mutexes)
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-mutex-reconfiguration.yaml')
+        self.updateConfigLayout('layout-mutex-reconfiguration')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.release('project-test1')
+        self.launch_server.release('project-test1')
         self.waitUntilSettled()
 
-        # The check pipeline should be empty
-        items = self.sched.layout.pipelines['check'].getAllItems()
-        self.assertEqual(len(items), 0)
+        # There should be no builds anymore
+        self.assertEqual(len(self.builds), 0)
 
         # The mutex should be released
         self.assertFalse('test-mutex' in self.sched.mutex.mutexes)
 
-    def test_node_label(self):
-        "Test that a job runs on a specific node label"
-        self.worker.registerFunction('build:node-project-test1:debian')
-
-        A = self.fake_gerrit.addFakeChange('org/node-project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.waitUntilSettled()
-
-        self.assertIsNone(self.getJobFromHistory('node-project-merge').node)
-        self.assertEqual(self.getJobFromHistory('node-project-test1').node,
-                         'debian')
-        self.assertIsNone(self.getJobFromHistory('node-project-test2').node)
-
     def test_live_reconfiguration(self):
         "Test that live reconfiguration works"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -2487,13 +2331,14 @@
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_merge_conflict(self):
         # A real-world bug: a change in a gate queue has a merge
         # conflict and a job is added to its project while it's
         # sitting in the queue.  The job gets added to the change and
         # enqueued and the change gets stuck.
         self.worker.registerFunction('build:project-test3')
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         # This change is fine.  It's here to stop the queue long
         # enough for the next change to be subject to the
@@ -2501,17 +2346,17 @@
         # next change.  This change will succeed and merge.
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         A.addPatchset(['conflict'])
-        A.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
 
         # This change will be in merge conflict.  During the
         # reconfiguration, we will add a job.  We want to make sure
         # that doesn't cause it to get stuck.
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         B.addPatchset(['conflict'])
-        B.addApproval('CRVW', 2)
+        B.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
@@ -2523,14 +2368,13 @@
         self.assertEqual(len(self.history), 0)
 
         # Add the "project-test3" job.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-add-job.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-add-job.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -2547,31 +2391,32 @@
                          'SUCCESS')
         self.assertEqual(len(self.history), 4)
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_failed_root(self):
         # An extrapolation of test_live_reconfiguration_merge_conflict
         # that tests a job added to a job tree with a failed root does
         # not run.
         self.worker.registerFunction('build:project-test3')
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         # This change is fine.  It's here to stop the queue long
         # enough for the next change to be subject to the
         # reconfiguration.  This change will succeed and merge.
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         A.addPatchset(['conflict'])
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        self.worker.addFailTest('project-merge', B)
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.launch_server.failJob('project-merge', B)
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Both -merge jobs have run, but no others.
@@ -2586,14 +2431,13 @@
         self.assertEqual(len(self.history), 2)
 
         # Add the "project-test3" job.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-add-job.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-add-job.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -2609,6 +2453,7 @@
         self.assertEqual(self.history[4].result, 'SUCCESS')
         self.assertEqual(len(self.history), 5)
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_failed_job(self):
         # Test that a change with a removed failing job does not
         # disrupt reconfiguration.  If a change has a failed job and
@@ -2616,18 +2461,18 @@
         # bug where the code to re-set build statuses would run on
         # that build and raise an exception because the job no longer
         # existed.
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
 
         # This change will fail and later be removed by the reconfiguration.
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('project-test1')
+        self.launch_server.release('project-test1')
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -2640,14 +2485,13 @@
         self.assertEqual(len(self.history), 2)
 
         # Remove the test1 job.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-failed-job.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-failed-job.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(self.getJobFromHistory('project-test2').result,
@@ -2662,22 +2506,23 @@
         # Ensure the removed job was not included in the report.
         self.assertNotIn('project-test1', A.messages[0])
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_shared_queue(self):
         # Test that a change with a failing job which was removed from
         # this project but otherwise still exists in the system does
         # not disrupt reconfiguration.
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
 
-        self.worker.addFailTest('project1-project2-integration', A)
+        self.launch_server.failJob('project1-project2-integration', A)
 
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('project1-project2-integration')
+        self.launch_server.release('project1-project2-integration')
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -2690,14 +2535,13 @@
         self.assertEqual(len(self.history), 2)
 
         # Remove the integration job.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-shared-queue.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-shared-queue.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(self.getJobFromHistory('project1-merge').result,
@@ -2716,6 +2560,7 @@
         # Ensure the removed job was not included in the report.
         self.assertNotIn('project1-project2-integration', A.messages[0])
 
+    @skip("Disabled for early v3 development")
     def test_double_live_reconfiguration_shared_queue(self):
         # This was a real-world regression.  A change is added to
         # gate; a reconfigure happens, a second change which depends
@@ -2724,18 +2569,18 @@
 
         # A failure may indicate incorrect caching or cleaning up of
         # references during a reconfiguration.
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         B.setDependsOn(A, 1)
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         # Add the parent change.
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Reconfigure (with only one change in the pipeline).
@@ -2743,17 +2588,17 @@
         self.waitUntilSettled()
 
         # Add the child change.
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Reconfigure (with both in the pipeline).
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 8)
@@ -2763,11 +2608,12 @@
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(B.reported, 2)
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_del_project(self):
         # Test project deletion from layout
         # while changes are enqueued
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project1', 'master', 'C')
@@ -2775,19 +2621,18 @@
         # A Depends-On: B
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             A.subject, B.data['id'])
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
 
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
         self.assertEqual(len(self.builds), 5)
 
         # This layout defines only org/project, not org/project1
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-del-project.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-del-project.yaml')
         self.sched.reconfigure(self.config)
         self.waitUntilSettled()
 
@@ -2797,8 +2642,8 @@
         self.assertEqual(job_c.changes, '3,1')
         self.assertEqual(job_c.result, 'ABORTED')
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(self.getJobFromHistory('project-test1').changes,
@@ -2814,13 +2659,14 @@
         self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
         self.assertIn('Build succeeded', A.messages[0])
 
+    @skip("Disabled for early v3 development")
     def test_live_reconfiguration_functions(self):
         "Test live reconfiguration with a custom function"
         self.worker.registerFunction('build:node-project-test1:debian')
         self.worker.registerFunction('build:node-project-test1:wheezy')
         A = self.fake_gerrit.addFakeChange('org/node-project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertIsNone(self.getJobFromHistory('node-project-merge').node)
@@ -2828,15 +2674,14 @@
                          'debian')
         self.assertIsNone(self.getJobFromHistory('node-project-test2').node)
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-live-'
-                        'reconfiguration-functions.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-live-reconfiguration-functions.yaml')
         self.sched.reconfigure(self.config)
         self.worker.build_history = []
 
         B = self.fake_gerrit.addFakeChange('org/node-project', 'master', 'B')
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertIsNone(self.getJobFromHistory('node-project-merge').node)
@@ -2844,16 +2689,17 @@
                          'wheezy')
         self.assertIsNone(self.getJobFromHistory('node-project-test2').node)
 
+    @skip("Disabled for early v3 development")
     def test_delayed_repo_init(self):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-delayed-repo-init.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-delayed-repo-init.yaml')
         self.sched.reconfigure(self.config)
 
         self.init_repo("org/new-project")
         A = self.fake_gerrit.addFakeChange('org/new-project', 'master', 'A')
 
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -2865,15 +2711,14 @@
         self.assertEqual(A.reported, 2)
 
     def test_repo_deleted(self):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-repo-deleted.yaml')
+        self.updateConfigLayout('layout-repo-deleted')
         self.sched.reconfigure(self.config)
 
         self.init_repo("org/delete-project")
         A = self.fake_gerrit.addFakeChange('org/delete-project', 'master', 'A')
 
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -2885,12 +2730,16 @@
         self.assertEqual(A.reported, 2)
 
         # Delete org/new-project zuul repo. Should be recloned.
-        shutil.rmtree(os.path.join(self.git_root, "org/delete-project"))
+        p = 'org/delete-project'
+        if os.path.exists(os.path.join(self.merger_src_root, p)):
+            shutil.rmtree(os.path.join(self.merger_src_root, p))
+        if os.path.exists(os.path.join(self.launcher_src_root, p)):
+            shutil.rmtree(os.path.join(self.launcher_src_root, p))
 
         B = self.fake_gerrit.addFakeChange('org/delete-project', 'master', 'B')
 
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
         self.assertEqual(self.getJobFromHistory('project-merge').result,
                          'SUCCESS')
@@ -2901,6 +2750,7 @@
         self.assertEqual(B.data['status'], 'MERGED')
         self.assertEqual(B.reported, 2)
 
+    @skip("Disabled for early v3 development")
     def test_tags(self):
         "Test job tags"
         self.config.set('zuul', 'layout_config',
@@ -2922,11 +2772,9 @@
 
     def test_timer(self):
         "Test that a periodic job is triggered"
-        self.worker.hold_jobs_in_build = True
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-timer.yaml')
+        self.launch_server.hold_jobs_in_build = True
+        self.updateConfigLayout('layout-timer')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
 
         # The pipeline triggers every second, so we should have seen
         # several by now.
@@ -2937,18 +2785,17 @@
 
         port = self.webapp.server.socket.getsockname()[1]
 
-        req = urllib.request.Request("http://localhost:%s/status.json" % port)
+        req = urllib.request.Request(
+            "http://localhost:%s/openstack/status" % port)
         f = urllib.request.urlopen(req)
         data = f.read()
 
-        self.worker.hold_jobs_in_build = False
+        self.launch_server.hold_jobs_in_build = False
         # Stop queuing timer triggered jobs so that the assertions
         # below don't race against more jobs being queued.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-no-timer.yaml')
+        self.commitLayoutUpdate('layout-timer', 'layout-no-timer')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
-        self.worker.release()
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(self.getJobFromHistory(
@@ -2970,16 +2817,14 @@
 
     def test_idle(self):
         "Test that frequent periodic jobs work"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
+        self.updateConfigLayout('layout-idle')
 
         for x in range(1, 3):
             # Test that timer triggers periodic jobs even across
             # layout config reloads.
             # Start timer trigger
-            self.config.set('zuul', 'layout_config',
-                            'tests/fixtures/layout-idle.yaml')
             self.sched.reconfigure(self.config)
-            self.registerJobs()
             self.waitUntilSettled()
 
             # The pipeline triggers every second, so we should have seen
@@ -2988,21 +2833,23 @@
 
             # Stop queuing timer triggered jobs so that the assertions
             # below don't race against more jobs being queued.
-            self.config.set('zuul', 'layout_config',
-                            'tests/fixtures/layout-no-timer.yaml')
+            before = self.commitLayoutUpdate('layout-idle', 'layout-no-timer')
             self.sched.reconfigure(self.config)
-            self.registerJobs()
             self.waitUntilSettled()
-
-            self.assertEqual(len(self.builds), 2)
-            self.worker.release('.*')
+            self.assertEqual(len(self.builds), 2,
+                             'Timer builds iteration #%d' % x)
+            self.launch_server.release('.*')
             self.waitUntilSettled()
             self.assertEqual(len(self.builds), 0)
             self.assertEqual(len(self.history), x * 2)
+            # Revert back to layout-idle
+            repo = git.Repo(os.path.join(self.test_root,
+                                         'upstream',
+                                         'layout-idle'))
+            repo.git.reset('--hard', before)
 
     def test_check_smtp_pool(self):
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-smtp.yaml')
+        self.updateConfigLayout('layout-smtp')
         self.sched.reconfigure(self.config)
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -3033,11 +2880,9 @@
 
     def test_timer_smtp(self):
         "Test that a periodic job is triggered"
-        self.worker.hold_jobs_in_build = True
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-timer-smtp.yaml')
+        self.launch_server.hold_jobs_in_build = True
+        self.updateConfigLayout('layout-timer-smtp')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
 
         # The pipeline triggers every second, so we should have seen
         # several by now.
@@ -3045,7 +2890,7 @@
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 2)
-        self.worker.release('.*')
+        self.launch_server.release('.*')
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 2)
 
@@ -3069,14 +2914,13 @@
 
         # Stop queuing timer triggered jobs and let any that may have
         # queued through so that end of test assertions pass.
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-no-timer.yaml')
+        self.commitLayoutUpdate('layout-timer-smtp', 'layout-no-timer')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
         self.waitUntilSettled()
-        self.worker.release('.*')
+        self.launch_server.release('.*')
         self.waitUntilSettled()
 
+    @skip("Disabled for early v3 development")
     def test_timer_sshkey(self):
         "Test that a periodic job can setup SSH key authentication"
         self.worker.hold_jobs_in_build = True
@@ -3123,12 +2967,13 @@
     def test_client_enqueue_change(self):
         "Test that the RPC client can enqueue a change"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        A.addApproval('APRV', 1)
+        A.addApproval('code-review', 2)
+        A.addApproval('approved', 1)
 
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
-        r = client.enqueue(pipeline='gate',
+        r = client.enqueue(tenant='tenant-one',
+                           pipeline='gate',
                            project='org/project',
                            trigger='gerrit',
                            change='1,1')
@@ -3149,6 +2994,7 @@
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
         r = client.enqueue_ref(
+            tenant='tenant-one',
             pipeline='post',
             project='org/project',
             trigger='gerrit',
@@ -3166,8 +3012,19 @@
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure,
+                                         "Invalid tenant"):
+            r = client.enqueue(tenant='tenant-foo',
+                               pipeline='gate',
+                               project='org/project',
+                               trigger='gerrit',
+                               change='1,1')
+            client.shutdown()
+            self.assertEqual(r, False)
+
+        with testtools.ExpectedException(zuul.rpcclient.RPCFailure,
                                          "Invalid project"):
-            r = client.enqueue(pipeline='gate',
+            r = client.enqueue(tenant='tenant-one',
+                               pipeline='gate',
                                project='project-does-not-exist',
                                trigger='gerrit',
                                change='1,1')
@@ -3176,7 +3033,8 @@
 
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure,
                                          "Invalid pipeline"):
-            r = client.enqueue(pipeline='pipeline-does-not-exist',
+            r = client.enqueue(tenant='tenant-one',
+                               pipeline='pipeline-does-not-exist',
                                project='org/project',
                                trigger='gerrit',
                                change='1,1')
@@ -3185,7 +3043,8 @@
 
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure,
                                          "Invalid trigger"):
-            r = client.enqueue(pipeline='gate',
+            r = client.enqueue(tenant='tenant-one',
+                               pipeline='gate',
                                project='org/project',
                                trigger='trigger-does-not-exist',
                                change='1,1')
@@ -3194,7 +3053,8 @@
 
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure,
                                          "Invalid change"):
-            r = client.enqueue(pipeline='gate',
+            r = client.enqueue(tenant='tenant-one',
+                               pipeline='gate',
                                project='org/project',
                                trigger='gerrit',
                                change='1,1')
@@ -3207,42 +3067,44 @@
 
     def test_client_promote(self):
         "Test that the RPC client can promote a change"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
-        items = self.sched.layout.pipelines['gate'].getAllItems()
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        items = tenant.layout.pipelines['gate'].getAllItems()
         enqueue_times = {}
         for item in items:
             enqueue_times[str(item.change)] = item.enqueue_time
 
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
-        r = client.promote(pipeline='gate',
+        r = client.promote(tenant='tenant-one',
+                           pipeline='gate',
                            change_ids=['2,1', '3,1'])
 
         # ensure that enqueue times are durable
-        items = self.sched.layout.pipelines['gate'].getAllItems()
+        items = tenant.layout.pipelines['gate'].getAllItems()
         for item in items:
             self.assertEqual(
                 enqueue_times[str(item.change)], item.enqueue_time)
 
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 6)
@@ -3253,19 +3115,19 @@
         self.assertEqual(self.builds[4].name, 'project-test1')
         self.assertEqual(self.builds[5].name, 'project-test2')
 
-        self.assertTrue(self.job_has_changes(self.builds[0], B))
-        self.assertFalse(self.job_has_changes(self.builds[0], A))
-        self.assertFalse(self.job_has_changes(self.builds[0], C))
+        self.assertTrue(self.builds[0].hasChanges(B))
+        self.assertFalse(self.builds[0].hasChanges(A))
+        self.assertFalse(self.builds[0].hasChanges(C))
 
-        self.assertTrue(self.job_has_changes(self.builds[2], B))
-        self.assertTrue(self.job_has_changes(self.builds[2], C))
-        self.assertFalse(self.job_has_changes(self.builds[2], A))
+        self.assertTrue(self.builds[2].hasChanges(B))
+        self.assertTrue(self.builds[2].hasChanges(C))
+        self.assertFalse(self.builds[2].hasChanges(A))
 
-        self.assertTrue(self.job_has_changes(self.builds[4], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], C))
-        self.assertTrue(self.job_has_changes(self.builds[4], A))
+        self.assertTrue(self.builds[4].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(C))
+        self.assertTrue(self.builds[4].hasChanges(A))
 
-        self.worker.release()
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -3282,34 +3144,35 @@
         "Test that the RPC client can promote a dependent change"
         # C (depends on B) -> B -> A ; then promote C to get:
         # A -> C (depends on B) -> B
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
 
         C.setDependsOn(B, 1)
 
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
 
         self.waitUntilSettled()
 
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
-        r = client.promote(pipeline='gate',
+        r = client.promote(tenant='tenant-one',
+                           pipeline='gate',
                            change_ids=['3,1'])
 
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         self.assertEqual(len(self.builds), 6)
@@ -3320,19 +3183,19 @@
         self.assertEqual(self.builds[4].name, 'project-test1')
         self.assertEqual(self.builds[5].name, 'project-test2')
 
-        self.assertTrue(self.job_has_changes(self.builds[0], B))
-        self.assertFalse(self.job_has_changes(self.builds[0], A))
-        self.assertFalse(self.job_has_changes(self.builds[0], C))
+        self.assertTrue(self.builds[0].hasChanges(B))
+        self.assertFalse(self.builds[0].hasChanges(A))
+        self.assertFalse(self.builds[0].hasChanges(C))
 
-        self.assertTrue(self.job_has_changes(self.builds[2], B))
-        self.assertTrue(self.job_has_changes(self.builds[2], C))
-        self.assertFalse(self.job_has_changes(self.builds[2], A))
+        self.assertTrue(self.builds[2].hasChanges(B))
+        self.assertTrue(self.builds[2].hasChanges(C))
+        self.assertFalse(self.builds[2].hasChanges(A))
 
-        self.assertTrue(self.job_has_changes(self.builds[4], B))
-        self.assertTrue(self.job_has_changes(self.builds[4], C))
-        self.assertTrue(self.job_has_changes(self.builds[4], A))
+        self.assertTrue(self.builds[4].hasChanges(B))
+        self.assertTrue(self.builds[4].hasChanges(C))
+        self.assertTrue(self.builds[4].hasChanges(A))
 
-        self.worker.release()
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -3347,51 +3210,54 @@
 
     def test_client_promote_negative(self):
         "Test that the RPC client returns errors for promotion"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         client = zuul.rpcclient.RPCClient('127.0.0.1',
                                           self.gearman_server.port)
 
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure):
-            r = client.promote(pipeline='nonexistent',
+            r = client.promote(tenant='tenant-one',
+                               pipeline='nonexistent',
                                change_ids=['2,1', '3,1'])
             client.shutdown()
             self.assertEqual(r, False)
 
         with testtools.ExpectedException(zuul.rpcclient.RPCFailure):
-            r = client.promote(pipeline='gate',
+            r = client.promote(tenant='tenant-one',
+                               pipeline='gate',
                                change_ids=['4,1'])
             client.shutdown()
             self.assertEqual(r, False)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
+    @skip("Disabled for early v3 development")
     def test_queue_rate_limiting(self):
         "Test that DependentPipelines are rate limited with dep across window"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-rate-limit.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-rate-limit.yaml')
         self.sched.reconfigure(self.config)
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
 
         C.setDependsOn(B, 1)
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
         # Only A and B will have their merge jobs queued because
@@ -3400,9 +3266,9 @@
         self.assertEqual(self.builds[0].name, 'project-merge')
         self.assertEqual(self.builds[1].name, 'project-merge')
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Only A and B will have their test jobs queued because
@@ -3413,7 +3279,7 @@
         self.assertEqual(self.builds[2].name, 'project-test1')
         self.assertEqual(self.builds[3].name, 'project-test2')
 
-        self.worker.release('project-.*')
+        self.launch_server.release('project-.*')
         self.waitUntilSettled()
 
         queue = self.sched.layout.pipelines['gate'].queues[0]
@@ -3427,7 +3293,7 @@
         self.assertEqual(len(self.builds), 1)
         self.assertEqual(self.builds[0].name, 'project-merge')
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Only B's test jobs are queued because window is still 1.
@@ -3435,7 +3301,7 @@
         self.assertEqual(self.builds[0].name, 'project-test1')
         self.assertEqual(self.builds[1].name, 'project-test2')
 
-        self.worker.release('project-.*')
+        self.launch_server.release('project-.*')
         self.waitUntilSettled()
 
         # B was successfully merged so window is increased to 2.
@@ -3447,7 +3313,7 @@
         self.assertEqual(len(self.builds), 1)
         self.assertEqual(self.builds[0].name, 'project-merge')
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # After successful merge job the test jobs for C are queued.
@@ -3455,7 +3321,7 @@
         self.assertEqual(self.builds[0].name, 'project-test1')
         self.assertEqual(self.builds[1].name, 'project-test2')
 
-        self.worker.release('project-.*')
+        self.launch_server.release('project-.*')
         self.waitUntilSettled()
 
         # C successfully merged so window is bumped to 3.
@@ -3463,27 +3329,28 @@
         self.assertEqual(queue.window_floor, 1)
         self.assertEqual(C.data['status'], 'MERGED')
 
+    @skip("Disabled for early v3 development")
     def test_queue_rate_limiting_dependent(self):
         "Test that DependentPipelines are rate limited with dep in window"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-rate-limit.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-rate-limit.yaml')
         self.sched.reconfigure(self.config)
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
 
         B.setDependsOn(A, 1)
 
-        self.worker.addFailTest('project-test1', A)
+        self.launch_server.failJob('project-test1', A)
 
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
         # Only A and B will have their merge jobs queued because
@@ -3492,9 +3359,9 @@
         self.assertEqual(self.builds[0].name, 'project-merge')
         self.assertEqual(self.builds[1].name, 'project-merge')
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Only A and B will have their test jobs queued because
@@ -3505,7 +3372,7 @@
         self.assertEqual(self.builds[2].name, 'project-test1')
         self.assertEqual(self.builds[3].name, 'project-test2')
 
-        self.worker.release('project-.*')
+        self.launch_server.release('project-.*')
         self.waitUntilSettled()
 
         queue = self.sched.layout.pipelines['gate'].queues[0]
@@ -3520,7 +3387,7 @@
         self.assertEqual(len(self.builds), 1)
         self.assertEqual(self.builds[0].name, 'project-merge')
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
 
         # Only C's test jobs are queued because window is still 1.
@@ -3528,7 +3395,7 @@
         self.assertEqual(self.builds[0].name, 'project-test1')
         self.assertEqual(self.builds[1].name, 'project-test2')
 
-        self.worker.release('project-.*')
+        self.launch_server.release('project-.*')
         self.waitUntilSettled()
 
         # C was successfully merged so window is increased to 2.
@@ -3536,13 +3403,14 @@
         self.assertEqual(queue.window_floor, 1)
         self.assertEqual(C.data['status'], 'MERGED')
 
+    @skip("Disabled for early v3 development")
     def test_worker_update_metadata(self):
         "Test if a worker can send back metadata about itself"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.launcher.builds), 1)
@@ -3570,55 +3438,46 @@
         self.assertEqual("v1.1", build.worker.version)
         self.assertEqual({'something': 'else'}, build.worker.extra)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
     def test_footer_message(self):
         "Test a pipeline's footer message is correctly added to the report."
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-footer-message.yaml')
+        self.updateConfigLayout('layout-footer-message')
         self.sched.reconfigure(self.config)
-        self.registerJobs()
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.worker.addFailTest('test1', A)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.launch_server.failJob('project-test1', A)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(2, len(self.smtp_messages))
 
-        failure_body = """\
+        failure_msg = """\
 Build failed.  For information on how to proceed, see \
-http://wiki.example.org/Test_Failures
+http://wiki.example.org/Test_Failures"""
 
-- test1 http://logs.example.com/1/1/gate/test1/0 : FAILURE in 0s
-- test2 http://logs.example.com/1/1/gate/test2/1 : SUCCESS in 0s
-
+        footer_msg = """\
 For CI problems and help debugging, contact ci@example.org"""
 
-        success_body = """\
-Build succeeded.
+        self.assertTrue(self.smtp_messages[0]['body'].startswith(failure_msg))
+        self.assertTrue(self.smtp_messages[0]['body'].endswith(footer_msg))
+        self.assertFalse(self.smtp_messages[1]['body'].startswith(failure_msg))
+        self.assertTrue(self.smtp_messages[1]['body'].endswith(footer_msg))
 
-- test1 http://logs.example.com/2/1/gate/test1/2 : SUCCESS in 0s
-- test2 http://logs.example.com/2/1/gate/test2/3 : SUCCESS in 0s
-
-For CI problems and help debugging, contact ci@example.org"""
-
-        self.assertEqual(failure_body, self.smtp_messages[0]['body'])
-        self.assertEqual(success_body, self.smtp_messages[1]['body'])
-
+    @skip("Disabled for early v3 development")
     def test_merge_failure_reporters(self):
         """Check that the config is set up correctly"""
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-merge-failure.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-merge-failure.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
@@ -3659,19 +3518,20 @@
             )
         )
 
+    @skip("Disabled for early v3 development")
     def test_merge_failure_reports(self):
         """Check that when a change fails to merge the correct message is sent
         to the correct reporter"""
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-merge-failure.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-merge-failure.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
         # Check a test failure isn't reported to SMTP
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.worker.addFailTest('project-test1', A)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.launch_server.failJob('project-test1', A)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(3, len(self.history))  # 3 jobs
@@ -3683,10 +3543,10 @@
         B.addPatchset(['conflict'])
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
         C.addPatchset(['conflict'])
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(C.addApproval('APRV', 1))
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(6, len(self.history))  # A and B jobs
@@ -3694,6 +3554,7 @@
         self.assertEqual('The merge failed! For more information...',
                          self.smtp_messages[0]['body'])
 
+    @skip("Disabled for early v3 development")
     def test_default_merge_failure_reports(self):
         """Check that the default merge failure reports are correct."""
 
@@ -3702,10 +3563,10 @@
         A.addPatchset(['conflict'])
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
         B.addPatchset(['conflict'])
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(3, len(self.history))  # A jobs
@@ -3719,18 +3580,19 @@
         self.assertNotIn('logs.example.com', B.messages[1])
         self.assertNotIn('SKIPPED', B.messages[1])
 
+    @skip("Disabled for early v3 development")
     def test_swift_instructions(self):
         "Test that the correct swift instructions are sent to the workers"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-swift.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-swift.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
 
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(
@@ -3760,16 +3622,16 @@
                              parameters['SWIFT_MOSTLY_HMAC_BODY'].split('\n')))
         self.assertIn('SWIFT_MOSTLY_SIGNATURE', self.builds[1].parameters)
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
     def test_client_get_running_jobs(self):
         "Test that the RPC client can get a list of running jobs"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         client = zuul.rpcclient.RPCClient('127.0.0.1',
@@ -3781,7 +3643,7 @@
             if time.time() - start > 10:
                 raise Exception("Timeout waiting for gearman server to report "
                                 + "back to the client")
-            build = self.launcher.builds.values()[0]
+            build = self.launch_client.builds.values()[0]
             if build.worker.name == "My Worker":
                 break
             else:
@@ -3815,8 +3677,8 @@
                 self.assertEqual('gate', job['pipeline'])
                 break
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         running_items = client.get_running_jobs()
@@ -3829,6 +3691,9 @@
                                            'master', 'A')
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project-merge').result,
+                         'SUCCESS')
         self.assertEqual(
             self.getJobFromHistory('experimental-project-test').result,
             'SUCCESS')
@@ -3838,8 +3703,8 @@
         "Test cross-repo dependencies"
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         AM2 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM2')
         AM1 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM1')
@@ -3867,26 +3732,26 @@
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             A.subject, B.data['id'])
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'NEW')
 
-        for connection in self.connections.values():
+        for connection in self.connections.connections.values():
             connection.maintainCache([])
 
-        self.worker.hold_jobs_in_build = True
-        B.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.launch_server.hold_jobs_in_build = True
+        B.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(AM2.queried, 0)
@@ -3896,37 +3761,40 @@
         self.assertEqual(A.reported, 2)
         self.assertEqual(B.reported, 2)
 
-        self.assertEqual(self.getJobFromHistory('project1-merge').changes,
-                         '2,1 1,1')
+        changes = self.getJobFromHistory(
+            'project-merge', 'org/project1').changes
+        self.assertEqual(changes, '2,1 1,1')
 
     def test_crd_branch(self):
         "Test cross-repo dependencies in multiple branches"
+
+        self.create_branch('org/project2', 'mp')
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C')
         C.data['id'] = B.data['id']
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
         # A Depends-On: B+C
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             A.subject, B.data['id'])
 
-        self.worker.hold_jobs_in_build = True
-        B.addApproval('APRV', 1)
-        C.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.launch_server.hold_jobs_in_build = True
+        B.addApproval('approved', 1)
+        C.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -3936,36 +3804,37 @@
         self.assertEqual(B.reported, 2)
         self.assertEqual(C.reported, 2)
 
-        self.assertEqual(self.getJobFromHistory('project1-merge').changes,
-                         '2,1 3,1 1,1')
+        changes = self.getJobFromHistory(
+            'project-merge', 'org/project1').changes
+        self.assertEqual(changes, '2,1 3,1 1,1')
 
     def test_crd_multiline(self):
         "Test multiple depends-on lines in commit"
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
         C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
-        C.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
 
         # A Depends-On: B+C
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % (
             A.subject, B.data['id'], C.data['id'])
 
-        self.worker.hold_jobs_in_build = True
-        B.addApproval('APRV', 1)
-        C.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.launch_server.hold_jobs_in_build = True
+        B.addApproval('approved', 1)
+        C.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -3975,15 +3844,16 @@
         self.assertEqual(B.reported, 2)
         self.assertEqual(C.reported, 2)
 
-        self.assertEqual(self.getJobFromHistory('project1-merge').changes,
-                         '2,1 3,1 1,1')
+        changes = self.getJobFromHistory(
+            'project-merge', 'org/project1').changes
+        self.assertEqual(changes, '2,1 3,1 1,1')
 
     def test_crd_unshared_gate(self):
         "Test cross-repo dependencies in unshared gate queues"
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         # A Depends-On: B
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
@@ -3991,8 +3861,8 @@
 
         # A and B do not share a queue, make sure that A is unable to
         # enqueue B (and therefore, A is unable to be enqueued).
-        B.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        B.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -4002,7 +3872,7 @@
         self.assertEqual(len(self.history), 0)
 
         # Enqueue and merge B alone.
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(B.data['status'], 'MERGED')
@@ -4010,7 +3880,7 @@
 
         # Now that B is merged, A should be able to be enqueued and
         # merged.
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -4020,31 +3890,31 @@
         "Test reverse cross-repo dependencies"
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         # A Depends-On: B
 
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             A.subject, B.data['id'])
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'NEW')
 
-        self.worker.hold_jobs_in_build = True
-        A.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.launch_server.hold_jobs_in_build = True
+        A.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.release('.*-merge')
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -4052,15 +3922,16 @@
         self.assertEqual(A.reported, 2)
         self.assertEqual(B.reported, 2)
 
-        self.assertEqual(self.getJobFromHistory('project1-merge').changes,
-                         '2,1 1,1')
+        changes = self.getJobFromHistory(
+            'project-merge', 'org/project1').changes
+        self.assertEqual(changes, '2,1 1,1')
 
     def test_crd_cycle(self):
         "Test cross-repo dependency cycles"
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         # A -> B -> A (via commit-depends)
 
@@ -4069,7 +3940,7 @@
         B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             B.subject, A.data['id'])
 
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.reported, 0)
@@ -4082,15 +3953,15 @@
         self.init_repo("org/unknown")
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'B')
-        A.addApproval('CRVW', 2)
-        B.addApproval('CRVW', 2)
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
 
         # A Depends-On: B
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
             A.subject, B.data['id'])
 
-        B.addApproval('APRV', 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        B.addApproval('approved', 1)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         # Unknown projects cannot share a queue with any other
@@ -4104,14 +3975,14 @@
         self.assertEqual(len(self.history), 0)
 
         # Simulate change B being gated outside this layout
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         B.setMerged()
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # Now that B is merged, A should be able to be enqueued and
         # merged.
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'MERGED')
@@ -4122,6 +3993,7 @@
     def test_crd_check(self):
         "Test cross-repo dependencies in independent pipelines"
 
+        self.launch_server.hold_jobs_in_build = True
         self.gearman_server.hold_jobs_in_queue = True
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
@@ -4139,27 +4011,37 @@
         self.gearman_server.release()
         self.waitUntilSettled()
 
-        path = os.path.join(self.git_root, "org/project1")
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+
+        path = os.path.join(self.builds[0].jobdir.src_root, "org/project1")
         repo = git.Repo(path)
         repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
         repo_messages.reverse()
-        correct_messages = ['initial commit', 'A-1']
+        correct_messages = [
+            'initial commit', 'add content from fixture', 'A-1']
         self.assertEqual(repo_messages, correct_messages)
 
-        path = os.path.join(self.git_root, "org/project2")
+        path = os.path.join(self.builds[0].jobdir.src_root, "org/project2")
         repo = git.Repo(path)
         repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
         repo_messages.reverse()
-        correct_messages = ['initial commit', 'B-1']
+        correct_messages = [
+            'initial commit', 'add content from fixture', 'B-1']
         self.assertEqual(repo_messages, correct_messages)
 
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
+
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'NEW')
         self.assertEqual(A.reported, 1)
         self.assertEqual(B.reported, 0)
 
         self.assertEqual(self.history[0].changes, '2,1 1,1')
-        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0)
 
     def test_crd_check_git_depends(self):
         "Test single-repo dependencies in independent pipelines"
@@ -4186,17 +4068,19 @@
 
         self.assertEqual(self.history[0].changes, '1,1')
         self.assertEqual(self.history[-1].changes, '1,1 2,1')
-        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0)
 
         self.assertIn('Build succeeded', A.messages[0])
         self.assertIn('Build succeeded', B.messages[0])
 
     def test_crd_check_duplicate(self):
         "Test duplicate check in independent pipelines"
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
-        check_pipeline = self.sched.layout.pipelines['check']
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        check_pipeline = tenant.layout.pipelines['check']
 
         # Add two git-dependent changes...
         B.setDependsOn(A, 1)
@@ -4217,8 +4101,8 @@
         # Release jobs in order to avoid races with change A jobs
         # finishing before change B jobs.
         self.orderedRelease()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(A.data['status'], 'NEW')
@@ -4228,7 +4112,7 @@
 
         self.assertEqual(self.history[0].changes, '1,1 2,1')
         self.assertEqual(self.history[1].changes, '1,1')
-        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
+        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0)
 
         self.assertIn('Build succeeded', A.messages[0])
         self.assertIn('Build succeeded', B.messages[0])
@@ -4251,8 +4135,9 @@
 
         # Make sure the items still share a change queue, and the
         # first one is not live.
-        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 1)
-        queue = self.sched.layout.pipelines['check'].queues[0]
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1)
+        queue = tenant.layout.pipelines['check'].queues[0]
         first_item = queue.queue[0]
         for item in queue.queue:
             self.assertEqual(item.queue, first_item.queue)
@@ -4269,7 +4154,7 @@
         self.assertEqual(B.reported, 0)
 
         self.assertEqual(self.history[0].changes, '2,1 1,1')
-        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
+        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0)
 
     def test_crd_check_reconfiguration(self):
         self._test_crd_check_reconfiguration('org/project1', 'org/project2')
@@ -4282,10 +4167,11 @@
         self.init_repo("org/unknown")
         self._test_crd_check_reconfiguration('org/project1', 'org/unknown')
 
+    @skip("Disabled for early v3 development")
     def test_crd_check_ignore_dependencies(self):
         "Test cross-repo dependencies can be ignored"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-ignore-dependencies.yaml')
+        self.updateConfigLayout(
+            'tests/fixtures/layout-ignore-dependencies.yaml')
         self.sched.reconfigure(self.config)
         self.registerJobs()
 
@@ -4327,6 +4213,7 @@
         for job in self.history:
             self.assertEqual(len(job.changes.split()), 1)
 
+    @skip("Disabled for early v3 development")
     def test_crd_check_transitive(self):
         "Test transitive cross-repo dependencies"
         # Specifically, if A -> B -> C, and C gets a new patchset and
@@ -4411,10 +4298,18 @@
         # call the method that would ultimately be called by the event
         # processing.
 
-        source = self.sched.layout.pipelines['gate'].source
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        source = tenant.layout.pipelines['gate'].source
+
+        # TODO(pabelanger): As we add more source / trigger APIs we should make
+        # it easier for users to create events for testing.
+        event = zuul.model.TriggerEvent()
+        event.trigger_name = 'gerrit'
+        event.change_number = '1'
+        event.patch_number = '2'
         with testtools.ExpectedException(
             Exception, "Dependency cycle detected"):
-            source._getChange(u'1', u'2', True)
+            source.getChange(event, True)
         self.log.debug("Got expected dependency cycle exception")
 
         # Now if we update B to remove the depends-on, everything
@@ -4422,20 +4317,22 @@
 
         B.addPatchset()
         B.data['commitMessage'] = '%s\n' % (B.subject,)
-        source._getChange(u'1', u'2', True)
-        source._getChange(u'2', u'2', True)
+
+        source.getChange(event, True)
+        event.change_number = '2'
+        source.getChange(event, True)
 
     def test_disable_at(self):
         "Test a pipeline will only report to the disabled trigger when failing"
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-disable-at.yaml')
+        self.updateConfigLayout('layout-disabled-at')
         self.sched.reconfigure(self.config)
 
-        self.assertEqual(3, self.sched.layout.pipelines['check'].disable_at)
+        tenant = self.sched.abide.tenants.get('openstack')
+        self.assertEqual(3, tenant.layout.pipelines['check'].disable_at)
         self.assertEqual(
-            0, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+            0, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(tenant.layout.pipelines['check']._disabled)
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
@@ -4449,32 +4346,32 @@
         J = self.fake_gerrit.addFakeChange('org/project', 'master', 'J')
         K = self.fake_gerrit.addFakeChange('org/project', 'master', 'K')
 
-        self.worker.addFailTest('project-test1', A)
-        self.worker.addFailTest('project-test1', B)
+        self.launch_server.failJob('project-test1', A)
+        self.launch_server.failJob('project-test1', B)
         # Let C pass, resetting the counter
-        self.worker.addFailTest('project-test1', D)
-        self.worker.addFailTest('project-test1', E)
-        self.worker.addFailTest('project-test1', F)
-        self.worker.addFailTest('project-test1', G)
-        self.worker.addFailTest('project-test1', H)
+        self.launch_server.failJob('project-test1', D)
+        self.launch_server.failJob('project-test1', E)
+        self.launch_server.failJob('project-test1', F)
+        self.launch_server.failJob('project-test1', G)
+        self.launch_server.failJob('project-test1', H)
         # I also passes but should only report to the disabled reporters
-        self.worker.addFailTest('project-test1', J)
-        self.worker.addFailTest('project-test1', K)
+        self.launch_server.failJob('project-test1', J)
+        self.launch_server.failJob('project-test1', K)
 
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
 
         self.assertEqual(
-            2, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+            2, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(tenant.layout.pipelines['check']._disabled)
 
         self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
 
         self.assertEqual(
-            0, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+            0, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(tenant.layout.pipelines['check']._disabled)
 
         self.fake_gerrit.addEvent(D.getPatchsetCreatedEvent(1))
         self.fake_gerrit.addEvent(E.getPatchsetCreatedEvent(1))
@@ -4483,8 +4380,8 @@
 
         # We should be disabled now
         self.assertEqual(
-            3, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertTrue(self.sched.layout.pipelines['check']._disabled)
+            3, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertTrue(tenant.layout.pipelines['check']._disabled)
 
         # We need to wait between each of these patches to make sure the
         # smtp messages come back in an expected order
@@ -4514,30 +4411,35 @@
         self.assertEqual(3, len(self.smtp_messages))
         self.assertEqual(0, len(G.messages))
         self.assertIn('Build failed.', self.smtp_messages[0]['body'])
-        self.assertIn('/7/1/check', self.smtp_messages[0]['body'])
+        self.assertIn(
+            'project-test1 https://server/job', self.smtp_messages[0]['body'])
         self.assertEqual(0, len(H.messages))
         self.assertIn('Build failed.', self.smtp_messages[1]['body'])
-        self.assertIn('/8/1/check', self.smtp_messages[1]['body'])
+        self.assertIn(
+            'project-test1 https://server/job', self.smtp_messages[1]['body'])
         self.assertEqual(0, len(I.messages))
         self.assertIn('Build succeeded.', self.smtp_messages[2]['body'])
-        self.assertIn('/9/1/check', self.smtp_messages[2]['body'])
+        self.assertIn(
+            'project-test1 https://server/job', self.smtp_messages[2]['body'])
 
         # Now reload the configuration (simulate a HUP) to check the pipeline
         # comes out of disabled
         self.sched.reconfigure(self.config)
 
-        self.assertEqual(3, self.sched.layout.pipelines['check'].disable_at)
+        tenant = self.sched.abide.tenants.get('openstack')
+
+        self.assertEqual(3, tenant.layout.pipelines['check'].disable_at)
         self.assertEqual(
-            0, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+            0, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(tenant.layout.pipelines['check']._disabled)
 
         self.fake_gerrit.addEvent(J.getPatchsetCreatedEvent(1))
         self.fake_gerrit.addEvent(K.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
 
         self.assertEqual(
-            2, self.sched.layout.pipelines['check']._consecutive_failures)
-        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+            2, tenant.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(tenant.layout.pipelines['check']._disabled)
 
         # J and K went back to gerrit
         self.assertEqual(1, len(J.messages))
@@ -4547,70 +4449,399 @@
         # No more messages reported via smtp
         self.assertEqual(3, len(self.smtp_messages))
 
-    def test_success_pattern(self):
-        "Ensure bad build params are ignored"
+    def test_rerun_on_abort(self):
+        "Test that if a launch server fails to run a job, it is run again"
 
-        # Use SMTP reporter to grab the result message easier
-        self.init_repo("org/docs")
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-success-pattern.yaml')
+        self.launch_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+
+        self.assertEqual(len(self.builds), 2)
+        self.builds[0].requeue = True
+        self.launch_server.release('.*-test*')
+        self.waitUntilSettled()
+
+        for x in range(3):
+            self.assertEqual(len(self.builds), 1,
+                             'len of builds at x=%d is wrong' % x)
+            self.builds[0].requeue = True
+            self.launch_server.release('.*-test1')
+            self.waitUntilSettled()
+
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 6)
+        self.assertEqual(self.countJobResults(self.history, 'SUCCESS'), 2)
+        self.assertEqual(A.reported, 1)
+        self.assertIn('RETRY_LIMIT', A.messages[0])
+
+    def test_zookeeper_disconnect(self):
+        "Test that jobs are launched after a zookeeper disconnect"
+
+        self.fake_nodepool.paused = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.zk.client.stop()
+        self.zk.client.start()
+        self.fake_nodepool.paused = False
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2)
+
+    def test_nodepool_failure(self):
+        "Test that jobs are reported after a nodepool failure"
+
+        self.fake_nodepool.paused = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        req = self.fake_nodepool.getNodeRequests()[0]
+        self.fake_nodepool.addFailRequest(req)
+
+        self.fake_nodepool.paused = False
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 2)
+        self.assertIn('project-merge : NODE_FAILURE', A.messages[1])
+        self.assertIn('project-test1 : SKIPPED', A.messages[1])
+        self.assertIn('project-test2 : SKIPPED', A.messages[1])
+
+
+class TestDuplicatePipeline(ZuulTestCase):
+    tenant_config_file = 'config/duplicate-pipeline/main.yaml'
+
+    def test_duplicate_pipelines(self):
+        "Test that a change matching multiple pipelines works"
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getChangeRestoredEvent())
+        self.waitUntilSettled()
+
+        self.assertHistory([
+            dict(name='project-test1', result='SUCCESS', changes='1,1',
+                 pipeline='dup1'),
+            dict(name='project-test1', result='SUCCESS', changes='1,1',
+                 pipeline='dup2'),
+        ], ordered=False)
+
+        self.assertEqual(len(A.messages), 2)
+
+        if 'dup1' in A.messages[0]:
+            self.assertIn('dup1', A.messages[0])
+            self.assertNotIn('dup2', A.messages[0])
+            self.assertIn('project-test1', A.messages[0])
+            self.assertIn('dup2', A.messages[1])
+            self.assertNotIn('dup1', A.messages[1])
+            self.assertIn('project-test1', A.messages[1])
+        else:
+            self.assertIn('dup1', A.messages[1])
+            self.assertNotIn('dup2', A.messages[1])
+            self.assertIn('project-test1', A.messages[1])
+            self.assertIn('dup2', A.messages[0])
+            self.assertNotIn('dup1', A.messages[0])
+            self.assertIn('project-test1', A.messages[0])
+
+
+class TestSchedulerOneJobProject(ZuulTestCase):
+    tenant_config_file = 'config/one-job-project/main.yaml'
+
+    def test_one_job_project(self):
+        "Test that queueing works with one job"
+        A = self.fake_gerrit.addFakeChange('org/one-job-project',
+                                           'master', 'A')
+        B = self.fake_gerrit.addFakeChange('org/one-job-project',
+                                           'master', 'B')
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2)
+        self.assertEqual(B.data['status'], 'MERGED')
+        self.assertEqual(B.reported, 2)
+
+
+class TestSchedulerTemplatedProject(ZuulTestCase):
+    tenant_config_file = 'config/templated-project/main.yaml'
+
+    def test_job_from_templates_launched(self):
+        "Test whether a job generated via a template can be launched"
+
+        A = self.fake_gerrit.addFakeChange(
+            'org/templated-project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+
+    def test_layered_templates(self):
+        "Test whether a job generated via a template can be launched"
+
+        A = self.fake_gerrit.addFakeChange(
+            'org/layered-project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('layered-project-test3'
+                                                ).result, 'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('layered-project-test4'
+                                                ).result, 'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('layered-project-foo-test5'
+                                                ).result, 'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test6').result,
+                         'SUCCESS')
+
+
+class TestSchedulerSuccessURL(ZuulTestCase):
+    tenant_config_file = 'config/success-url/main.yaml'
+
+    def test_success_url(self):
+        "Ensure bad build params are ignored"
         self.sched.reconfigure(self.config)
-        self.worker.hold_jobs_in_build = True
-        self.registerJobs()
+        self.init_repo('org/docs')
 
         A = self.fake_gerrit.addFakeChange('org/docs', 'master', 'A')
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
         self.waitUntilSettled()
 
+        # Both builds ran: docs-draft-test + docs-draft-test2
+        self.assertEqual(len(self.history), 2)
+
         # Grab build id
-        self.assertEqual(len(self.builds), 1)
-        uuid = self.builds[0].unique[:7]
+        for build in self.history:
+            if build.name == 'docs-draft-test':
+                uuid = build.uuid[:7]
+                break
 
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
-        self.waitUntilSettled()
-
-        self.assertEqual(len(self.smtp_messages), 1)
-        body = self.smtp_messages[0]['body'].splitlines()
+        # Two msgs: 'Starting...'  + results
+        self.assertEqual(len(self.smtp_messages), 2)
+        body = self.smtp_messages[1]['body'].splitlines()
         self.assertEqual('Build succeeded.', body[0])
 
         self.assertIn(
             '- docs-draft-test http://docs-draft.example.org/1/1/1/check/'
             'docs-draft-test/%s/publish-docs/' % uuid,
             body[2])
+
+        # NOTE: This default URL is currently hard-coded in launcher/server.py
         self.assertIn(
-            '- docs-draft-test2 https://server/job/docs-draft-test2/1/',
+            '- docs-draft-test2 https://server/job',
             body[3])
 
-    def test_rerun_on_abort(self):
-        "Test that if a worker fails to run a job, it is run again"
 
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-abort-attempts.yaml')
-        self.sched.reconfigure(self.config)
-        self.worker.hold_jobs_in_build = True
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+class TestSchedulerMerges(ZuulTestCase):
+    tenant_config_file = 'config/merges/main.yaml'
+
+    def _test_project_merge_mode(self, mode):
+        self.launch_server.keep_jobdir = False
+        project = 'org/project-%s' % mode
+        self.launch_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
+        B = self.fake_gerrit.addFakeChange(project, 'master', 'B')
+        C = self.fake_gerrit.addFakeChange(project, 'master', 'C')
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        self.worker.release('.*-merge')
+        build = self.builds[-1]
+        ref = self.getParameter(build, 'ZUUL_REF')
+
+        path = os.path.join(build.jobdir.src_root, project)
+        repo = git.Repo(path)
+        repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
+        repo_messages.reverse()
+
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
-        self.assertEqual(len(self.builds), 2)
-        self.builds[0].requeue = True
-        self.worker.release('.*-test*')
+        return repo_messages
+
+    def _test_merge(self, mode):
+        us_path = os.path.join(
+            self.upstream_root, 'org/project-%s' % mode)
+        expected_messages = [
+            'initial commit',
+            'add content from fixture',
+            # the intermediate commits order is nondeterministic
+            "Merge commit 'refs/changes/1/2/1' of %s into HEAD" % us_path,
+            "Merge commit 'refs/changes/1/3/1' of %s into HEAD" % us_path,
+        ]
+        result = self._test_project_merge_mode(mode)
+        self.assertEqual(result[:2], expected_messages[:2])
+        self.assertEqual(result[-2:], expected_messages[-2:])
+
+    def test_project_merge_mode_merge(self):
+        self._test_merge('merge')
+
+    def test_project_merge_mode_merge_resolve(self):
+        self._test_merge('merge-resolve')
+
+    def test_project_merge_mode_cherrypick(self):
+        expected_messages = [
+            'initial commit',
+            'add content from fixture',
+            'A-1',
+            'B-1',
+            'C-1']
+        result = self._test_project_merge_mode('cherry-pick')
+        self.assertEqual(result, expected_messages)
+
+    def test_merge_branch(self):
+        "Test that the right commits are on alternate branches"
+        self.create_branch('org/project-merge-branches', 'mp')
+
+        self.launch_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'mp', 'A')
+        B = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'mp', 'B')
+        C = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'mp', 'C')
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
-        for x in range(3):
-            self.assertEqual(len(self.builds), 1)
-            self.builds[0].requeue = True
-            self.worker.release('.*-test1')
-            self.waitUntilSettled()
-
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.release('.*-merge')
         self.waitUntilSettled()
-        self.assertEqual(len(self.history), 6)
-        self.assertEqual(self.countJobResults(self.history, 'SUCCESS'), 2)
-        self.assertEqual(A.reported, 1)
-        self.assertIn('RETRY_LIMIT', A.messages[0])
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+
+        build = self.builds[-1]
+        self.assertEqual(self.getParameter(build, 'ZUUL_BRANCH'), 'mp')
+        ref = self.getParameter(build, 'ZUUL_REF')
+        path = os.path.join(
+            build.jobdir.src_root, 'org/project-merge-branches')
+        repo = git.Repo(path)
+
+        repo_messages = [c.message.strip() for c in repo.iter_commits(ref)]
+        repo_messages.reverse()
+        correct_messages = [
+            'initial commit',
+            'add content from fixture',
+            'mp commit',
+            'A-1', 'B-1', 'C-1']
+        self.assertEqual(repo_messages, correct_messages)
+
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
+
+    def test_merge_multi_branch(self):
+        "Test that dependent changes on multiple branches are merged"
+        self.create_branch('org/project-merge-branches', 'mp')
+
+        self.launch_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'master', 'A')
+        B = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'mp', 'B')
+        C = self.fake_gerrit.addFakeChange(
+            'org/project-merge-branches', 'master', 'C')
+        A.addApproval('code-review', 2)
+        B.addApproval('code-review', 2)
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        job_A = None
+        for job in self.builds:
+            if 'project-merge' in job.name:
+                job_A = job
+        ref_A = self.getParameter(job_A, 'ZUUL_REF')
+        commit_A = self.getParameter(job_A, 'ZUUL_COMMIT')
+        self.log.debug("Got Zuul ref for change A: %s" % ref_A)
+        self.log.debug("Got Zuul commit for change A: %s" % commit_A)
+
+        path = os.path.join(
+            job_A.jobdir.src_root, "org/project-merge-branches")
+        repo = git.Repo(path)
+        repo_messages = [c.message.strip()
+                         for c in repo.iter_commits(ref_A)]
+        repo_messages.reverse()
+        correct_messages = [
+            'initial commit', 'add content from fixture', 'A-1']
+        self.assertEqual(repo_messages, correct_messages)
+
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+
+        job_B = None
+        for job in self.builds:
+            if 'project-merge' in job.name:
+                job_B = job
+        ref_B = self.getParameter(job_B, 'ZUUL_REF')
+        commit_B = self.getParameter(job_B, 'ZUUL_COMMIT')
+        self.log.debug("Got Zuul ref for change B: %s" % ref_B)
+        self.log.debug("Got Zuul commit for change B: %s" % commit_B)
+
+        path = os.path.join(
+            job_B.jobdir.src_root, "org/project-merge-branches")
+        repo = git.Repo(path)
+        repo_messages = [c.message.strip()
+                         for c in repo.iter_commits(ref_B)]
+        repo_messages.reverse()
+        correct_messages = [
+            'initial commit', 'add content from fixture', 'mp commit', 'B-1']
+        self.assertEqual(repo_messages, correct_messages)
+
+        self.launch_server.release('.*-merge')
+        self.waitUntilSettled()
+
+        job_C = None
+        for job in self.builds:
+            if 'project-merge' in job.name:
+                job_C = job
+        ref_C = self.getParameter(job_C, 'ZUUL_REF')
+        commit_C = self.getParameter(job_C, 'ZUUL_COMMIT')
+        self.log.debug("Got Zuul ref for change C: %s" % ref_C)
+        self.log.debug("Got Zuul commit for change C: %s" % commit_C)
+        path = os.path.join(
+            job_C.jobdir.src_root, "org/project-merge-branches")
+        repo = git.Repo(path)
+        repo_messages = [c.message.strip()
+                         for c in repo.iter_commits(ref_C)]
+
+        repo_messages.reverse()
+        correct_messages = [
+            'initial commit', 'add content from fixture',
+            'A-1', 'C-1']
+        # Ensure the right commits are in the history for this ref
+        self.assertEqual(repo_messages, correct_messages)
+
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
diff --git a/tests/test_stack_dump.py b/tests/unit/test_stack_dump.py
similarity index 100%
rename from tests/test_stack_dump.py
rename to tests/unit/test_stack_dump.py
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
new file mode 100644
index 0000000..cf88265
--- /dev/null
+++ b/tests/unit/test_v3.py
@@ -0,0 +1,241 @@
+#!/usr/bin/env python
+
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import textwrap
+
+from tests.base import AnsibleZuulTestCase, ZuulTestCase
+
+
+class TestMultipleTenants(AnsibleZuulTestCase):
+    # A temporary class to hold new tests while others are disabled
+
+    tenant_config_file = 'config/multi-tenant/main.yaml'
+
+    def test_multiple_tenants(self):
+        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('project1-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('python27').result,
+                         'SUCCESS')
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertIn('tenant-one-gate', A.messages[1],
+                      "A should transit tenant-one gate")
+        self.assertNotIn('tenant-two-gate', A.messages[1],
+                         "A should *not* transit tenant-two gate")
+
+        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('python27',
+                                                'org/project2').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project2-test1').result,
+                         'SUCCESS')
+        self.assertEqual(B.data['status'], 'MERGED')
+        self.assertEqual(B.reported, 2,
+                         "B should report start and success")
+        self.assertIn('tenant-two-gate', B.messages[1],
+                      "B should transit tenant-two gate")
+        self.assertNotIn('tenant-one-gate', B.messages[1],
+                         "B should *not* transit tenant-one gate")
+
+        self.assertEqual(A.reported, 2, "Activity in tenant two should"
+                         "not affect tenant one")
+
+
+class TestInRepoConfig(ZuulTestCase):
+    # A temporary class to hold new tests while others are disabled
+
+    tenant_config_file = 'config/in-repo/main.yaml'
+
+    def test_in_repo_config(self):
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertIn('tenant-one-gate', A.messages[1],
+                      "A should transit tenant-one gate")
+
+    def test_dynamic_config(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: project-test2
+
+            - project:
+                name: org/project
+                tenant-one-gate:
+                  jobs:
+                    - project-test2
+            """)
+
+        in_repo_playbook = textwrap.dedent(
+            """
+            - hosts: all
+              tasks: []
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf,
+                     'playbooks/project-test2.yaml': in_repo_playbook}
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertIn('tenant-one-gate', A.messages[1],
+                      "A should transit tenant-one gate")
+        self.assertHistory([
+            dict(name='project-test2', result='SUCCESS', changes='1,1')])
+
+        self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+
+        # Now that the config change is landed, it should be live for
+        # subsequent changes.
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+        self.assertHistory([
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-test2', result='SUCCESS', changes='2,1')])
+
+    def test_in_repo_branch(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: project-test2
+
+            - project:
+                name: org/project
+                tenant-one-gate:
+                  jobs:
+                    - project-test2
+            """)
+
+        in_repo_playbook = textwrap.dedent(
+            """
+            - hosts: all
+              tasks: []
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf,
+                     'playbooks/project-test2.yaml': in_repo_playbook}
+        self.create_branch('org/project', 'stable')
+        A = self.fake_gerrit.addFakeChange('org/project', 'stable', 'A',
+                                           files=file_dict)
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and success")
+        self.assertIn('tenant-one-gate', A.messages[1],
+                      "A should transit tenant-one gate")
+        self.assertHistory([
+            dict(name='project-test2', result='SUCCESS', changes='1,1')])
+        self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+
+        # The config change should not affect master.
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertHistory([
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-test1', result='SUCCESS', changes='2,1')])
+
+        # The config change should be live for further changes on
+        # stable.
+        C = self.fake_gerrit.addFakeChange('org/project', 'stable', 'C')
+        C.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.waitUntilSettled()
+        self.assertHistory([
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-test1', result='SUCCESS', changes='2,1'),
+            dict(name='project-test2', result='SUCCESS', changes='3,1')])
+
+    def test_dynamic_syntax_error(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: project-test2
+                foo: error
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf}
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and failure")
+        self.assertIn('syntax error', A.messages[1],
+                      "A should have a syntax error reported")
+
+
+class TestAnsible(AnsibleZuulTestCase):
+    # A temporary class to hold new tests while others are disabled
+
+    tenant_config_file = 'config/ansible/main.yaml'
+
+    def test_playbook(self):
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        build = self.getJobFromHistory('timeout')
+        self.assertEqual(build.result, 'ABORTED')
+        build = self.getJobFromHistory('faillocal')
+        self.assertEqual(build.result, 'FAILURE')
+        build = self.getJobFromHistory('python27')
+        self.assertEqual(build.result, 'SUCCESS')
+        flag_path = os.path.join(self.test_root, build.uuid + '.flag')
+        self.assertTrue(os.path.exists(flag_path))
+        copied_path = os.path.join(self.test_root, build.uuid +
+                                   '.copied')
+        self.assertTrue(os.path.exists(copied_path))
+        failed_path = os.path.join(self.test_root, build.uuid +
+                                   '.failed')
+        self.assertFalse(os.path.exists(failed_path))
+        pre_flag_path = os.path.join(self.test_root, build.uuid +
+                                     '.pre.flag')
+        self.assertTrue(os.path.exists(pre_flag_path))
+        post_flag_path = os.path.join(self.test_root, build.uuid +
+                                      '.post.flag')
+        self.assertTrue(os.path.exists(post_flag_path))
+        bare_role_flag_path = os.path.join(self.test_root,
+                                           build.uuid + '.bare-role.flag')
+        self.assertTrue(os.path.exists(bare_role_flag_path))
diff --git a/tests/test_webapp.py b/tests/unit/test_webapp.py
similarity index 75%
rename from tests/test_webapp.py
rename to tests/unit/test_webapp.py
index 94f097a..2211d1b 100644
--- a/tests/test_webapp.py
+++ b/tests/unit/test_webapp.py
@@ -23,30 +23,31 @@
 
 
 class TestWebapp(ZuulTestCase):
-
-    def _cleanup(self):
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
-        self.waitUntilSettled()
+    tenant_config_file = 'config/single-tenant/main.yaml'
 
     def setUp(self):
         super(TestWebapp, self).setUp()
-        self.addCleanup(self._cleanup)
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
-        B.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        B.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
         self.waitUntilSettled()
         self.port = self.webapp.server.socket.getsockname()[1]
 
+    def tearDown(self):
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
+        self.waitUntilSettled()
+        super(TestWebapp, self).tearDown()
+
     def test_webapp_status(self):
         "Test that we can filter to only certain changes in the webapp."
 
         req = urllib.request.Request(
-            "http://localhost:%s/status" % self.port)
+            "http://localhost:%s/tenant-one/status" % self.port)
         f = urllib.request.urlopen(req)
         data = json.loads(f.read())
 
@@ -55,7 +56,7 @@
     def test_webapp_status_compat(self):
         # testing compat with status.json
         req = urllib.request.Request(
-            "http://localhost:%s/status.json" % self.port)
+            "http://localhost:%s/tenant-one/status.json" % self.port)
         f = urllib.request.urlopen(req)
         data = json.loads(f.read())
 
@@ -70,7 +71,7 @@
     def test_webapp_find_change(self):
         # can we filter by change id
         req = urllib.request.Request(
-            "http://localhost:%s/status/change/1,1" % self.port)
+            "http://localhost:%s/tenant-one/status/change/1,1" % self.port)
         f = urllib.request.urlopen(req)
         data = json.loads(f.read())
 
@@ -78,7 +79,7 @@
         self.assertEqual("org/project", data[0]['project'])
 
         req = urllib.request.Request(
-            "http://localhost:%s/status/change/2,1" % self.port)
+            "http://localhost:%s/tenant-one/status/change/2,1" % self.port)
         f = urllib.request.urlopen(req)
         data = json.loads(f.read())
 
diff --git a/tests/test_zuultrigger.py b/tests/unit/test_zuultrigger.py
similarity index 75%
rename from tests/test_zuultrigger.py
rename to tests/unit/test_zuultrigger.py
index 0d52fc9..5d9c6e0 100644
--- a/tests/test_zuultrigger.py
+++ b/tests/unit/test_zuultrigger.py
@@ -14,51 +14,40 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import logging
-
 from tests.base import ZuulTestCase
 
-logging.basicConfig(level=logging.DEBUG,
-                    format='%(asctime)s %(name)-32s '
-                    '%(levelname)-8s %(message)s')
 
-
-class TestZuulTrigger(ZuulTestCase):
-    """Test Zuul Trigger"""
+class TestZuulTriggerParentChangeEnqueued(ZuulTestCase):
+    tenant_config_file = 'config/zuultrigger/parent-change-enqueued/main.yaml'
 
     def test_zuul_trigger_parent_change_enqueued(self):
         "Test Zuul trigger event: parent-change-enqueued"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-zuultrigger-enqueued.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
         # This test has the following three changes:
         # B1 -> A; B2 -> A
         # When A is enqueued in the gate, B1 and B2 should both attempt
         # to be enqueued in both pipelines.  B1 should end up in check
         # and B2 in gate because of differing pipeline requirements.
-        self.worker.hold_jobs_in_build = True
+        self.launch_server.hold_jobs_in_build = True
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         B1 = self.fake_gerrit.addFakeChange('org/project', 'master', 'B1')
         B2 = self.fake_gerrit.addFakeChange('org/project', 'master', 'B2')
-        A.addApproval('CRVW', 2)
-        B1.addApproval('CRVW', 2)
-        B2.addApproval('CRVW', 2)
-        A.addApproval('VRFY', 1)    # required by gate
-        B1.addApproval('VRFY', -1)  # should go to check
-        B2.addApproval('VRFY', 1)   # should go to gate
-        B1.addApproval('APRV', 1)
-        B2.addApproval('APRV', 1)
+        A.addApproval('code-review', 2)
+        B1.addApproval('code-review', 2)
+        B2.addApproval('code-review', 2)
+        A.addApproval('verified', 1)    # required by gate
+        B1.addApproval('verified', -1)  # should go to check
+        B2.addApproval('verified', 1)   # should go to gate
+        B1.addApproval('approved', 1)
+        B2.addApproval('approved', 1)
         B1.setDependsOn(A, 1)
         B2.setDependsOn(A, 1)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         # Jobs are being held in build to make sure that 3,1 has time
         # to enqueue behind 1,1 so that the test is more
         # deterministic.
         self.waitUntilSettled()
-        self.worker.hold_jobs_in_build = False
-        self.worker.release()
+        self.launch_server.hold_jobs_in_build = False
+        self.launch_server.release()
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 3)
@@ -72,13 +61,15 @@
             else:
                 raise Exception("Unknown job")
 
-    def test_zuul_trigger_project_change_merged(self):
-        "Test Zuul trigger event: project-change-merged"
-        self.config.set('zuul', 'layout_config',
-                        'tests/fixtures/layout-zuultrigger-merged.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
 
+class TestZuulTriggerProjectChangeMerged(ZuulTestCase):
+
+    def setUp(self):
+        self.skip("Disabled because v3 noop job does not perform merge")
+
+    tenant_config_file = 'config/zuultrigger/project-change-merged/main.yaml'
+
+    def test_zuul_trigger_project_change_merged(self):
         # This test has the following three changes:
         # A, B, C;  B conflicts with A, but C does not.
         # When A is merged, B and C should be checked for conflicts,
@@ -90,12 +81,12 @@
         C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
         D = self.fake_gerrit.addFakeChange('org/project', 'master', 'D')
         E = self.fake_gerrit.addFakeChange('org/project', 'master', 'E')
-        A.addPatchset(['conflict'])
-        B.addPatchset(['conflict'])
-        D.addPatchset(['conflict2'])
-        E.addPatchset(['conflict2'])
-        A.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        A.addPatchset({'conflict': 'foo'})
+        B.addPatchset({'conflict': 'bar'})
+        D.addPatchset({'conflict2': 'foo'})
+        E.addPatchset({'conflict2': 'bar'})
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 1)
@@ -121,8 +112,8 @@
         # configuration.
         self.sched.reconfigure(self.config)
 
-        D.addApproval('CRVW', 2)
-        self.fake_gerrit.addEvent(D.addApproval('APRV', 1))
+        D.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(D.addApproval('approved', 1))
         self.waitUntilSettled()
 
         self.assertEqual(len(self.history), 2)
diff --git a/tools/nodepool-integration-setup.sh b/tools/nodepool-integration-setup.sh
new file mode 100755
index 0000000..c02a016
--- /dev/null
+++ b/tools/nodepool-integration-setup.sh
@@ -0,0 +1,12 @@
+#!/bin/bash -xe
+
+/usr/zuul-env/bin/zuul-cloner --workspace /tmp --cache-dir /opt/git \
+    git://git.openstack.org openstack-infra/nodepool
+
+ln -s /tmp/nodepool/log $WORKSPACE/logs
+
+cd /tmp/openstack-infra/nodepool
+/usr/local/jenkins/slave_scripts/install-distro-packages.sh
+sudo pip install .
+
+bash -xe ./tools/zuul-nodepool-integration/start.sh
diff --git a/tools/update-storyboard.py b/tools/update-storyboard.py
new file mode 100644
index 0000000..12e6916
--- /dev/null
+++ b/tools/update-storyboard.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python
+# Copyright (C) 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# This script updates the Zuul v3 Storyboard.  It uses a .boartty.yaml
+# file to get credential information.
+
+import requests
+import boartty.config
+import boartty.sync
+import logging  # noqa
+from pprint import pprint as p  # noqa
+
+
+class App(object):
+    pass
+
+
+def get_tasks(sync):
+    task_list = []
+    for story in sync.get('/v1/stories?tags=zuulv3'):
+        print("Story %s: %s" % (story['id'], story['title']))
+        for task in sync.get('/v1/stories/%s/tasks' % (story['id'])):
+            print("  %s" % (task['title'],))
+            task_list.append(task)
+    return task_list
+
+
+def task_in_lane(task, lane):
+    for item in lane['worklist']['items']:
+        if 'task' in item and item['task']['id'] == task['id']:
+            return True
+    return False
+
+
+def add_task(sync, task, lane):
+    print("Add task %s to %s" % (task['id'], lane['worklist']['id']))
+    r = sync.post('v1/worklists/%s/items/' % lane['worklist']['id'],
+                  dict(item_id=task['id'],
+                       item_type='task',
+                       list_position=0))
+    print(r)
+
+
+def remove_task(sync, task, lane):
+    print("Remove task %s from %s" % (task['id'], lane['worklist']['id']))
+    for item in lane['worklist']['items']:
+        if 'task' in item and item['task']['id'] == task['id']:
+            r = sync.delete('v1/worklists/%s/items/' % lane['worklist']['id'],
+                            dict(item_id=item['id']))
+            print(r)
+
+
+MAP = {
+    'todo': ['New', 'Backlog', 'Todo'],
+    'inprogress': ['In Progress', 'Blocked'],
+    'review': ['In Progress', 'Blocked'],
+    'merged': None,
+    'invalid': None,
+}
+
+
+def main():
+    requests.packages.urllib3.disable_warnings()
+    # logging.basicConfig(level=logging.DEBUG)
+    app = App()
+    app.config = boartty.config.Config('openstack')
+    sync = boartty.sync.Sync(app, False)
+    board = sync.get('v1/boards/41')
+    tasks = get_tasks(sync)
+
+    lanes = dict()
+    for lane in board['lanes']:
+        lanes[lane['worklist']['title']] = lane
+
+    for task in tasks:
+        ok_lanes = MAP[task['status']]
+        task_found = False
+        for lane_name, lane in lanes.items():
+            if task_in_lane(task, lane):
+                if ok_lanes and lane_name in ok_lanes:
+                    task_found = True
+                else:
+                    remove_task(sync, task, lane)
+        if ok_lanes and not task_found:
+            add_task(sync, task, lanes[ok_lanes[0]])
+
+if __name__ == '__main__':
+    main()
diff --git a/tox.ini b/tox.ini
index 06ccbcd..9c0d949 100644
--- a/tox.ini
+++ b/tox.ini
@@ -8,9 +8,8 @@
 setenv = STATSD_HOST=127.0.0.1
          STATSD_PORT=8125
          VIRTUAL_ENV={envdir}
-         OS_TEST_TIMEOUT=30
-         OS_LOG_DEFAULTS={env:OS_LOG_DEFAULTS:gear.Server=INFO,gear.Client=INFO}
-passenv = ZUUL_TEST_ROOT
+         OS_TEST_TIMEOUT=90
+passenv = ZUUL_TEST_ROOT OS_STDOUT_CAPTURE OS_STDERR_CAPTURE OS_LOG_CAPTURE OS_LOG_DEFAULTS
 usedevelop = True
 install_command = pip install {opts} {packages}
 deps = -r{toxinidir}/requirements.txt
@@ -44,6 +43,11 @@
 [testenv:validate-layout]
 commands = zuul-server -c etc/zuul.conf-sample -t -l {posargs}
 
+[testenv:nodepool]
+setenv =
+   OS_TEST_PATH = ./tests/nodepool
+commands = python setup.py testr --slowest --testr-args='--concurrency=1 {posargs}'
+
 [flake8]
 # These are ignored intentionally in openstack-infra projects;
 # please don't submit patches that solely correct them or enable them.
diff --git a/tests/cmd/__init__.py b/zuul/ansible/action/__init__.py
similarity index 100%
copy from tests/cmd/__init__.py
copy to zuul/ansible/action/__init__.py
diff --git a/zuul/ansible/action/add_host.py b/zuul/ansible/action/add_host.py
new file mode 100644
index 0000000..d4b24aa
--- /dev/null
+++ b/zuul/ansible/action/add_host.py
@@ -0,0 +1,26 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+from zuul.ansible import paths
+add_host = paths._import_ansible_action_plugin("add_host")
+
+
+class ActionModule(add_host.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        return dict(
+            failed=True,
+            msg="Adding hosts to the inventory is prohibited")
diff --git a/zuul/ansible/action/asa_config.py b/zuul/ansible/action/asa_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/asa_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/asa_template.py b/zuul/ansible/action/asa_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/asa_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/assemble.py b/zuul/ansible/action/assemble.py
new file mode 100644
index 0000000..2cc7eb7
--- /dev/null
+++ b/zuul/ansible/action/assemble.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+assemble = paths._import_ansible_action_plugin("assemble")
+
+
+class ActionModule(assemble.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/copy.py b/zuul/ansible/action/copy.py
new file mode 100644
index 0000000..bb54430
--- /dev/null
+++ b/zuul/ansible/action/copy.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+copy = paths._import_ansible_action_plugin("copy")
+
+
+class ActionModule(copy.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/dellos10_config.py b/zuul/ansible/action/dellos10_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/dellos10_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/dellos6_config.py b/zuul/ansible/action/dellos6_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/dellos6_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/dellos9_config.py b/zuul/ansible/action/dellos9_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/dellos9_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/eos_config.py b/zuul/ansible/action/eos_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/eos_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/eos_template.py b/zuul/ansible/action/eos_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/eos_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/fetch.py b/zuul/ansible/action/fetch.py
new file mode 100644
index 0000000..170b655
--- /dev/null
+++ b/zuul/ansible/action/fetch.py
@@ -0,0 +1,29 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+fetch = paths._import_ansible_action_plugin("fetch")
+
+
+class ActionModule(fetch.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        dest = self._task.args.get('dest', None)
+
+        if dest and not paths._is_safe_path(dest):
+            return paths._fail_dict(dest)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/include_vars.py b/zuul/ansible/action/include_vars.py
new file mode 100644
index 0000000..5bc1d76
--- /dev/null
+++ b/zuul/ansible/action/include_vars.py
@@ -0,0 +1,31 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+include_vars = paths._import_ansible_action_plugin("include_vars")
+
+
+class ActionModule(include_vars.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source_dir = self._task.args.get('dir', None)
+        source_file = self._task.args.get('file', None)
+
+        for fileloc in (source_dir, source_file):
+            if fileloc and not paths._is_safe_path(fileloc):
+                return paths._fail_dict(fileloc)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/ios_config.py b/zuul/ansible/action/ios_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/ios_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/ios_template.py b/zuul/ansible/action/ios_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/ios_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/iosxr_config.py b/zuul/ansible/action/iosxr_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/iosxr_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/iosxr_template.py b/zuul/ansible/action/iosxr_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/iosxr_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/junos_config.py b/zuul/ansible/action/junos_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/junos_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/junos_template.py b/zuul/ansible/action/junos_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/junos_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/net_config.py b/zuul/ansible/action/net_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/net_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/net_template.py b/zuul/ansible/action/net_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/net_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/network.py b/zuul/ansible/action/network.py
new file mode 100644
index 0000000..41fc560
--- /dev/null
+++ b/zuul/ansible/action/network.py
@@ -0,0 +1,25 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+network = paths._import_ansible_action_plugin("network")
+
+
+class ActionModule(network.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        return dict(failed=True, msg='Use of network modules is prohibited')
diff --git a/zuul/ansible/action/normal.py b/zuul/ansible/action/normal.py
new file mode 100644
index 0000000..b18cb51
--- /dev/null
+++ b/zuul/ansible/action/normal.py
@@ -0,0 +1,33 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+from zuul.ansible import paths
+normal = paths._import_ansible_action_plugin('normal')
+
+
+class ActionModule(normal.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        if (self._play_context.connection == 'local'
+                or self._play_context.remote_addr == 'localhost'
+                or self._play_context.remote_addr.startswith('127.')
+                or self._task.delegate_to == 'localhost'
+                or (self._task.delegate_to
+                    and self._task.delegate_to.startswtih('127.'))):
+            return dict(
+                failed=True,
+                msg="Executing local code is prohibited")
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/nxos_config.py b/zuul/ansible/action/nxos_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/nxos_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/nxos_template.py b/zuul/ansible/action/nxos_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/nxos_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/ops_config.py b/zuul/ansible/action/ops_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/ops_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/ops_template.py b/zuul/ansible/action/ops_template.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/ops_template.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/patch.py b/zuul/ansible/action/patch.py
new file mode 100644
index 0000000..0b43c82
--- /dev/null
+++ b/zuul/ansible/action/patch.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+patch = paths._import_ansible_action_plugin("patch")
+
+
+class ActionModule(patch.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/script.py b/zuul/ansible/action/script.py
new file mode 100644
index 0000000..c95d357
--- /dev/null
+++ b/zuul/ansible/action/script.py
@@ -0,0 +1,34 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+script = paths._import_ansible_action_plugin("script")
+
+
+class ActionModule(script.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        # the script name is the first item in the raw params, so we split it
+        # out now so we know the file name we need to transfer to the remote,
+        # and everything else is an argument to the script which we need later
+        # to append to the remote command
+        parts = self._task.args.get('_raw_params', '').strip().split()
+        source = parts[0]
+
+        if not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/sros_config.py b/zuul/ansible/action/sros_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/sros_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/synchronize.py b/zuul/ansible/action/synchronize.py
new file mode 100644
index 0000000..75fd45f
--- /dev/null
+++ b/zuul/ansible/action/synchronize.py
@@ -0,0 +1,38 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+synchronize = paths._import_ansible_action_plugin("synchronize")
+
+
+class ActionModule(synchronize.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        dest = self._task.args.get('dest', None)
+        mode = self._task.args.get('mode', 'push')
+
+        if 'rsync_opts' not in self._task.args:
+            self._task.args['rsync_opts'] = []
+        if '--safe-links' not in self._task.args['rsync_opts']:
+            self._task.args['rsync_opts'].append('--safe-links')
+
+        if mode == 'push' and not paths._is_safe_path(source):
+            return paths._fail_dict(source, prefix='Syncing files from')
+        if mode == 'pull' and not paths._is_safe_path(dest):
+            return paths._fail_dict(dest, prefix='Syncing files to')
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/template.py b/zuul/ansible/action/template.py
new file mode 100644
index 0000000..c6df3d8
--- /dev/null
+++ b/zuul/ansible/action/template.py
@@ -0,0 +1,29 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+template = paths._import_ansible_action_plugin("template")
+
+
+class ActionModule(template.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+
+        if not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/unarchive.py b/zuul/ansible/action/unarchive.py
new file mode 100644
index 0000000..c78c331
--- /dev/null
+++ b/zuul/ansible/action/unarchive.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+unarchive = paths._import_ansible_action_plugin("unarchive")
+
+
+class ActionModule(unarchive.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/vyos_config.py b/zuul/ansible/action/vyos_config.py
new file mode 120000
index 0000000..7a739ba
--- /dev/null
+++ b/zuul/ansible/action/vyos_config.py
@@ -0,0 +1 @@
+network.py
\ No newline at end of file
diff --git a/zuul/ansible/action/win_copy.py b/zuul/ansible/action/win_copy.py
new file mode 100644
index 0000000..2751585
--- /dev/null
+++ b/zuul/ansible/action/win_copy.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+win_copy = paths._import_ansible_action_plugin("win_copy")
+
+
+class ActionModule(win_copy.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/zuul/ansible/action/win_template.py b/zuul/ansible/action/win_template.py
new file mode 100644
index 0000000..7a357f9
--- /dev/null
+++ b/zuul/ansible/action/win_template.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+
+from zuul.ansible import paths
+win_template = paths._import_ansible_action_plugin("win_template")
+
+
+class ActionModule(win_template.ActionModule):
+
+    def run(self, tmp=None, task_vars=None):
+
+        source = self._task.args.get('src', None)
+        remote_src = self._task.args.get('remote_src', False)
+
+        if not remote_src and not paths._is_safe_path(source):
+            return paths._fail_dict(source)
+        return super(ActionModule, self).run(tmp, task_vars)
diff --git a/tests/cmd/__init__.py b/zuul/ansible/callback/__init__.py
similarity index 100%
copy from tests/cmd/__init__.py
copy to zuul/ansible/callback/__init__.py
diff --git a/zuul/ansible/callback/zuul_stream.py b/zuul/ansible/callback/zuul_stream.py
new file mode 100644
index 0000000..9b8bccd
--- /dev/null
+++ b/zuul/ansible/callback/zuul_stream.py
@@ -0,0 +1,96 @@
+# Copyright 2017 Red Hat, Inc.
+#
+# Zuul is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Zuul is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import multiprocessing
+import socket
+import time
+
+from ansible.plugins.callback import default
+
+LOG_STREAM_PORT = 19885
+
+
+def linesplit(socket):
+    buff = socket.recv(4096)
+    buffering = True
+    while buffering:
+        if "\n" in buff:
+            (line, buff) = buff.split("\n", 1)
+            yield line + "\n"
+        else:
+            more = socket.recv(4096)
+            if not more:
+                buffering = False
+            else:
+                buff += more
+    if buff:
+        yield buff
+
+
+class CallbackModule(default.CallbackModule):
+
+    '''
+    This is the Zuul streaming callback. It's based on the default
+    callback plugin, but streams results from shell commands.
+    '''
+
+    CALLBACK_VERSION = 2.0
+    CALLBACK_TYPE = 'stdout'
+    CALLBACK_NAME = 'zuul_stream'
+
+    def __init__(self):
+
+        super(CallbackModule, self).__init__()
+        self._task = None
+        self._daemon_running = False
+        self._daemon_stamp = 'daemon-stamp-%s'
+        self._host_dict = {}
+
+    def _read_log(self, host, ip):
+        self._display.display("[%s] starting to log" % host)
+        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        while True:
+            try:
+                s.connect((ip, LOG_STREAM_PORT))
+            except Exception:
+                self._display.display("[%s] Waiting on logger" % host)
+                time.sleep(0.1)
+                continue
+            for line in linesplit(s):
+                self._display.display("[%s] %s " % (host, line.strip()))
+
+    def v2_playbook_on_play_start(self, play):
+        self._play = play
+        super(CallbackModule, self).v2_playbook_on_play_start(play)
+
+    def v2_playbook_on_task_start(self, task, is_conditional):
+        self._task = task
+
+        if self._play.strategy != 'free':
+            self._print_task_banner(task)
+        if task.action == 'command':
+            play_vars = self._play._variable_manager._hostvars
+            for host in self._play.hosts:
+                ip = play_vars[host]['ansible_host']
+                daemon_stamp = self._daemon_stamp % host
+                if not os.path.exists(daemon_stamp):
+                    self._host_dict[host] = ip
+                    # Touch stamp file
+                    open(daemon_stamp, 'w').close()
+                    p = multiprocessing.Process(
+                        target=self._read_log, args=(host, ip))
+                    p.daemon = True
+                    p.start()
diff --git a/zuul/ansible/library/command.py b/zuul/ansible/library/command.py
index 6390322..328ae7b 100644
--- a/zuul/ansible/library/command.py
+++ b/zuul/ansible/library/command.py
@@ -121,12 +121,13 @@
 from ast import literal_eval
 
 
+LOG_STREAM_FILE = '/tmp/console.log'
 PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
 
 
 class Console(object):
     def __enter__(self):
-        self.logfile = open('/tmp/console.html', 'a', 0)
+        self.logfile = open(LOG_STREAM_FILE, 'a', 0)
         return self
 
     def __exit__(self, etype, value, tb):
diff --git a/zuul/ansible/library/zuul_console.py b/zuul/ansible/library/zuul_console.py
index e70dac8..1932cf9 100644
--- a/zuul/ansible/library/zuul_console.py
+++ b/zuul/ansible/library/zuul_console.py
@@ -20,6 +20,9 @@
 import socket
 import threading
 
+LOG_STREAM_FILE = '/tmp/console.log'
+LOG_STREAM_PORT = 19885
+
 
 def daemonize():
     # A really basic daemonize method that should work well enough for
@@ -155,15 +158,15 @@
 
 
 def test():
-    s = Server('/tmp/console.html', 19885)
+    s = Server(LOG_STREAM_FILE, LOG_STREAM_PORT)
     s.run()
 
 
 def main():
     module = AnsibleModule(
         argument_spec=dict(
-            path=dict(default='/tmp/console.html'),
-            port=dict(default=19885, type='int'),
+            path=dict(default=LOG_STREAM_FILE),
+            port=dict(default=LOG_STREAM_PORT, type='int'),
         )
     )
 
diff --git a/zuul/ansible/library/zuul_log.py b/zuul/ansible/library/zuul_log.py
deleted file mode 100644
index 4b377d9..0000000
--- a/zuul/ansible/library/zuul_log.py
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/usr/bin/python
-
-# Copyright (c) 2016 IBM Corp.
-# Copyright (c) 2016 Red Hat
-#
-# This module is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This software is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this software.  If not, see <http://www.gnu.org/licenses/>.
-
-import datetime
-
-
-class Console(object):
-    def __enter__(self):
-        self.logfile = open('/tmp/console.html', 'a', 0)
-        return self
-
-    def __exit__(self, etype, value, tb):
-        self.logfile.close()
-
-    def addLine(self, ln):
-        ts = datetime.datetime.now()
-        outln = '%s | %s' % (str(ts), ln)
-        self.logfile.write(outln)
-
-
-def log(msg):
-    if not isinstance(msg, list):
-        msg = [msg]
-    with Console() as console:
-        for line in msg:
-            console.addLine("[Zuul] %s\n" % line)
-
-
-def main():
-    module = AnsibleModule(
-        argument_spec=dict(
-            msg=dict(required=True, type='raw'),
-        )
-    )
-
-    p = module.params
-    log(p['msg'])
-    module.exit_json(changed=True)
-
-from ansible.module_utils.basic import *  # noqa
-
-if __name__ == '__main__':
-    main()
diff --git a/zuul/ansible/paths.py b/zuul/ansible/paths.py
new file mode 100644
index 0000000..e387732
--- /dev/null
+++ b/zuul/ansible/paths.py
@@ -0,0 +1,53 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# This module is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This software is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this software.  If not, see <http://www.gnu.org/licenses/>.
+
+import imp
+import os
+
+import ansible.plugins.action
+
+
+def _is_safe_path(path):
+    full_path = os.path.realpath(os.path.abspath(os.path.expanduser(path)))
+    if not full_path.startswith(os.path.abspath(os.path.curdir)):
+        return False
+    return True
+
+
+def _fail_dict(path, prefix='Accessing files from'):
+    return dict(
+        failed=True,
+        path=path,
+        msg="{prefix} outside the working dir {curdir} is prohibited".format(
+            prefix=prefix,
+            curdir=os.path.abspath(os.path.curdir)))
+
+
+def _import_ansible_action_plugin(name):
+    # Ansible forces the import of our action plugins
+    # (zuul.ansible.action.foo) as ansible.plugins.action.foo, which
+    # is the import path of the ansible implementation.  Our
+    # implementations need to subclass that, but if we try to import
+    # it with that name, we will get our own module.  This bypasses
+    # Python's module namespace to load the actual ansible modules.
+    # We need to give it a name, however.  If we load it with its
+    # actual name, we will end up overwriting our module in Python's
+    # namespace, causing infinite recursion.  So we supply an
+    # otherwise unused name for the module:
+    # zuul.ansible.protected.action.foo.
+
+    return imp.load_module(
+        'zuul.ansible.protected.action.' + name,
+        *imp.find_module(name, ansible.plugins.action.__path__))
diff --git a/zuul/change_matcher.py b/zuul/change_matcher.py
index ca2d93f..845ba1c 100644
--- a/zuul/change_matcher.py
+++ b/zuul/change_matcher.py
@@ -35,9 +35,15 @@
     def copy(self):
         return self.__class__(self._regex)
 
+    def __deepcopy__(self, memo):
+        return self.copy()
+
     def __eq__(self, other):
         return str(self) == str(other)
 
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
     def __str__(self):
         return '{%s:%s}' % (self.__class__.__name__, self._regex)
 
diff --git a/zuul/cmd/__init__.py b/zuul/cmd/__init__.py
index 5ffd431..9fa4c03 100644
--- a/zuul/cmd/__init__.py
+++ b/zuul/cmd/__init__.py
@@ -24,6 +24,7 @@
 import sys
 import traceback
 
+import yaml
 yappi = extras.try_import('yappi')
 
 import zuul.lib.connections
@@ -86,10 +87,17 @@
             if not os.path.exists(fp):
                 raise Exception("Unable to read logging config file at %s" %
                                 fp)
-            logging.config.fileConfig(fp)
+
+            if os.path.splitext(fp)[1] in ('.yml', '.yaml'):
+                with open(fp, 'r') as f:
+                    logging.config.dictConfig(yaml.safe_load(f))
+
+            else:
+                logging.config.fileConfig(fp)
+
         else:
             logging.basicConfig(level=logging.DEBUG)
 
     def configure_connections(self):
-        self.connections = zuul.lib.connections.configure_connections(
-            self.config)
+        self.connections = zuul.lib.connections.ConnectionRegistry()
+        self.connections.configure(self.config)
diff --git a/zuul/cmd/client.py b/zuul/cmd/client.py
index 1ce2828..cc8edaa 100644
--- a/zuul/cmd/client.py
+++ b/zuul/cmd/client.py
@@ -46,6 +46,8 @@
                                            help='additional help')
 
         cmd_enqueue = subparsers.add_parser('enqueue', help='enqueue a change')
+        cmd_enqueue.add_argument('--tenant', help='tenant name',
+                                 required=True)
         cmd_enqueue.add_argument('--trigger', help='trigger name',
                                  required=True)
         cmd_enqueue.add_argument('--pipeline', help='pipeline name',
@@ -58,6 +60,8 @@
 
         cmd_enqueue = subparsers.add_parser('enqueue-ref',
                                             help='enqueue a ref')
+        cmd_enqueue.add_argument('--tenant', help='tenant name',
+                                 required=True)
         cmd_enqueue.add_argument('--trigger', help='trigger name',
                                  required=True)
         cmd_enqueue.add_argument('--pipeline', help='pipeline name',
@@ -76,6 +80,8 @@
 
         cmd_promote = subparsers.add_parser('promote',
                                             help='promote one or more changes')
+        cmd_promote.add_argument('--tenant', help='tenant name',
+                                 required=True)
         cmd_promote.add_argument('--pipeline', help='pipeline name',
                                  required=True)
         cmd_promote.add_argument('--changes', help='change ids',
@@ -127,7 +133,8 @@
 
     def enqueue(self):
         client = zuul.rpcclient.RPCClient(self.server, self.port)
-        r = client.enqueue(pipeline=self.args.pipeline,
+        r = client.enqueue(tenant=self.args.tenant,
+                           pipeline=self.args.pipeline,
                            project=self.args.project,
                            trigger=self.args.trigger,
                            change=self.args.change)
@@ -135,7 +142,8 @@
 
     def enqueue_ref(self):
         client = zuul.rpcclient.RPCClient(self.server, self.port)
-        r = client.enqueue_ref(pipeline=self.args.pipeline,
+        r = client.enqueue_ref(tenant=self.args.tenant,
+                               pipeline=self.args.pipeline,
                                project=self.args.project,
                                trigger=self.args.trigger,
                                ref=self.args.ref,
@@ -145,7 +153,8 @@
 
     def promote(self):
         client = zuul.rpcclient.RPCClient(self.server, self.port)
-        r = client.promote(pipeline=self.args.pipeline,
+        r = client.promote(tenant=self.args.tenant,
+                           pipeline=self.args.pipeline,
                            change_ids=self.args.changes)
         return r
 
diff --git a/zuul/cmd/launcher.py b/zuul/cmd/launcher.py
index 49643ae..596fd1a 100644
--- a/zuul/cmd/launcher.py
+++ b/zuul/cmd/launcher.py
@@ -29,7 +29,7 @@
 import signal
 
 import zuul.cmd
-import zuul.launcher.ansiblelaunchserver
+import zuul.launcher.server
 
 # No zuul imports that pull in paramiko here; it must not be
 # imported until after the daemonization.
@@ -52,7 +52,7 @@
                             action='store_true',
                             help='keep local jobdirs after run completes')
         parser.add_argument('command',
-                            choices=zuul.launcher.ansiblelaunchserver.COMMANDS,
+                            choices=zuul.launcher.server.COMMANDS,
                             nargs='?')
 
         self.args = parser.parse_args()
@@ -79,8 +79,8 @@
 
         self.log = logging.getLogger("zuul.Launcher")
 
-        LaunchServer = zuul.launcher.ansiblelaunchserver.LaunchServer
-        self.launcher = LaunchServer(self.config,
+        LaunchServer = zuul.launcher.server.LaunchServer
+        self.launcher = LaunchServer(self.config, self.connections,
                                      keep_jobdir=self.args.keep_jobdir)
         self.launcher.start()
 
@@ -102,7 +102,7 @@
     server.parse_arguments()
     server.read_config()
 
-    if server.args.command in zuul.launcher.ansiblelaunchserver.COMMANDS:
+    if server.args.command in zuul.launcher.server.COMMANDS:
         server.send_command(server.args.command)
         sys.exit(0)
 
diff --git a/zuul/cmd/server.py b/zuul/cmd/scheduler.py
similarity index 71%
rename from zuul/cmd/server.py
rename to zuul/cmd/scheduler.py
index 0b7538d..9a8b24f 100755
--- a/zuul/cmd/server.py
+++ b/zuul/cmd/scheduler.py
@@ -35,24 +35,20 @@
 # Similar situation with gear and statsd.
 
 
-class Server(zuul.cmd.ZuulApp):
+class Scheduler(zuul.cmd.ZuulApp):
     def __init__(self):
-        super(Server, self).__init__()
+        super(Scheduler, self).__init__()
         self.gear_server_pid = None
 
     def parse_arguments(self):
         parser = argparse.ArgumentParser(description='Project gating system.')
         parser.add_argument('-c', dest='config',
                             help='specify the config file')
-        parser.add_argument('-l', dest='layout',
-                            help='specify the layout file')
         parser.add_argument('-d', dest='nodaemon', action='store_true',
                             help='do not run as a daemon')
-        parser.add_argument('-t', dest='validate', nargs='?', const=True,
-                            metavar='JOB_LIST',
-                            help='validate layout file syntax (optionally '
-                            'providing the path to a file with a list of '
-                            'available job names)')
+        parser.add_argument('-t', dest='validate', action='store_true',
+                            help='validate config file syntax (Does not'
+                            'validate config repo validity)')
         parser.add_argument('--version', dest='version', action='version',
                             version=self._get_version(),
                             help='show zuul version')
@@ -79,38 +75,19 @@
         self.stop_gear_server()
         os._exit(0)
 
-    def test_config(self, job_list_path):
+    def test_config(self):
         # See comment at top of file about zuul imports
         import zuul.scheduler
-        import zuul.launcher.gearman
-        import zuul.trigger.gerrit
+        import zuul.launcher.client
 
         logging.basicConfig(level=logging.DEBUG)
-        self.sched = zuul.scheduler.Scheduler(self.config,
-                                              testonly=True)
-        self.configure_connections()
-        self.sched.registerConnections(self.connections, load=False)
-        layout = self.sched.testConfig(self.config.get('zuul',
-                                                       'layout_config'),
-                                       self.connections)
-        if not job_list_path:
-            return False
-
-        failure = False
-        path = os.path.expanduser(job_list_path)
-        if not os.path.exists(path):
-            raise Exception("Unable to find job list: %s" % path)
-        jobs = set()
-        jobs.add('noop')
-        for line in open(path):
-            v = line.strip()
-            if v:
-                jobs.add(v)
-        for job in sorted(layout.jobs):
-            if job not in jobs:
-                print("FAILURE: Job %s not defined" % job)
-                failure = True
-        return failure
+        try:
+            self.sched = zuul.scheduler.Scheduler(self.config,
+                                                  testonly=True)
+        except Exception as e:
+            self.log.error("%s" % e)
+            return -1
+        return 0
 
     def start_gear_server(self):
         pipe_read, pipe_write = os.pipe()
@@ -147,11 +124,13 @@
     def main(self):
         # See comment at top of file about zuul imports
         import zuul.scheduler
-        import zuul.launcher.gearman
+        import zuul.launcher.client
         import zuul.merger.client
+        import zuul.nodepool
         import zuul.lib.swift
         import zuul.webapp
         import zuul.rpclistener
+        import zuul.zk
 
         signal.signal(signal.SIGUSR2, zuul.cmd.stack_dump_handler)
         if (self.config.has_option('gearman_server', 'start') and
@@ -159,15 +138,24 @@
             self.start_gear_server()
 
         self.setup_logging('zuul', 'log_config')
-        self.log = logging.getLogger("zuul.Server")
+        self.log = logging.getLogger("zuul.Scheduler")
 
         self.sched = zuul.scheduler.Scheduler(self.config)
         # TODO(jhesketh): Move swift into a connection?
         self.swift = zuul.lib.swift.Swift(self.config)
 
-        gearman = zuul.launcher.gearman.Gearman(self.config, self.sched,
-                                                self.swift)
+        gearman = zuul.launcher.client.LaunchClient(self.config, self.sched,
+                                                    self.swift)
         merger = zuul.merger.client.MergeClient(self.config, self.sched)
+        nodepool = zuul.nodepool.Nodepool(self.sched)
+
+        zookeeper = zuul.zk.ZooKeeper()
+        if self.config.has_option('zuul', 'zookeeper_hosts'):
+            zookeeper_hosts = self.config.get('zuul', 'zookeeper_hosts')
+        else:
+            zookeeper_hosts = '127.0.0.1:2181'
+
+        zookeeper.connect(zookeeper_hosts)
 
         if self.config.has_option('zuul', 'status_expiry'):
             cache_expiry = self.config.getint('zuul', 'status_expiry')
@@ -192,12 +180,20 @@
         self.configure_connections()
         self.sched.setLauncher(gearman)
         self.sched.setMerger(merger)
+        self.sched.setNodepool(nodepool)
+        self.sched.setZooKeeper(zookeeper)
 
         self.log.info('Starting scheduler')
-        self.sched.start()
-        self.sched.registerConnections(self.connections)
-        self.sched.reconfigure(self.config)
-        self.sched.resume()
+        try:
+            self.sched.start()
+            self.sched.registerConnections(self.connections)
+            self.sched.reconfigure(self.config)
+            self.sched.resume()
+        except Exception:
+            self.log.exception("Error starting Zuul:")
+            # TODO(jeblair): If we had all threads marked as daemon,
+            # we might be able to have a nicer way of exiting here.
+            sys.exit(1)
         self.log.info('Starting Webapp')
         webapp.start()
         self.log.info('Starting RPC')
@@ -215,31 +211,25 @@
 
 
 def main():
-    server = Server()
-    server.parse_arguments()
+    scheduler = Scheduler()
+    scheduler.parse_arguments()
 
-    server.read_config()
+    scheduler.read_config()
 
-    if server.args.layout:
-        server.config.set('zuul', 'layout_config', server.args.layout)
+    if scheduler.args.validate:
+        sys.exit(scheduler.test_config())
 
-    if server.args.validate:
-        path = server.args.validate
-        if path is True:
-            path = None
-        sys.exit(server.test_config(path))
-
-    if server.config.has_option('zuul', 'pidfile'):
-        pid_fn = os.path.expanduser(server.config.get('zuul', 'pidfile'))
+    if scheduler.config.has_option('zuul', 'pidfile'):
+        pid_fn = os.path.expanduser(scheduler.config.get('zuul', 'pidfile'))
     else:
-        pid_fn = '/var/run/zuul/zuul.pid'
+        pid_fn = '/var/run/zuul-scheduler/zuul-scheduler.pid'
     pid = pid_file_module.TimeoutPIDLockFile(pid_fn, 10)
 
-    if server.args.nodaemon:
-        server.main()
+    if scheduler.args.nodaemon:
+        scheduler.main()
     else:
         with daemon.DaemonContext(pidfile=pid):
-            server.main()
+            scheduler.main()
 
 
 if __name__ == "__main__":
diff --git a/zuul/configloader.py b/zuul/configloader.py
new file mode 100644
index 0000000..42616a8
--- /dev/null
+++ b/zuul/configloader.py
@@ -0,0 +1,873 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from contextlib import contextmanager
+import copy
+import os
+import logging
+import six
+import yaml
+import pprint
+
+import voluptuous as vs
+
+from zuul import model
+import zuul.manager.dependent
+import zuul.manager.independent
+from zuul import change_matcher
+
+
+# Several forms accept either a single item or a list, this makes
+# specifying that in the schema easy (and explicit).
+def to_list(x):
+    return vs.Any([x], x)
+
+
+def as_list(item):
+    if not item:
+        return []
+    if isinstance(item, list):
+        return item
+    return [item]
+
+
+class ConfigurationSyntaxError(Exception):
+    pass
+
+
+@contextmanager
+def configuration_exceptions(stanza, conf):
+    try:
+        yield
+    except vs.Invalid as e:
+        conf = copy.deepcopy(conf)
+        context = conf.pop('_source_context')
+        m = """
+Zuul encountered a syntax error while parsing its configuration in the
+repo {repo} on branch {branch}.  The error was:
+
+  {error}
+
+The offending content was a {stanza} stanza with the content:
+
+{content}
+"""
+        m = m.format(repo=context.project.name,
+                     branch=context.branch,
+                     error=str(e),
+                     stanza=stanza,
+                     content=pprint.pformat(conf))
+        raise ConfigurationSyntaxError(m)
+
+
+class NodeSetParser(object):
+    @staticmethod
+    def getSchema():
+        node = {vs.Required('name'): str,
+                vs.Required('image'): str,
+                }
+
+        nodeset = {vs.Required('name'): str,
+                   vs.Required('nodes'): [node],
+                   '_source_context': model.SourceContext,
+                   }
+
+        return vs.Schema(nodeset)
+
+    @staticmethod
+    def fromYaml(layout, conf):
+        with configuration_exceptions('nodeset', conf):
+            NodeSetParser.getSchema()(conf)
+        ns = model.NodeSet(conf['name'])
+        for conf_node in as_list(conf['nodes']):
+            node = model.Node(conf_node['name'], conf_node['image'])
+            ns.addNode(node)
+        return ns
+
+
+class JobParser(object):
+    @staticmethod
+    def getSchema():
+        swift_tmpurl = {vs.Required('name'): str,
+                        'container': str,
+                        'expiry': int,
+                        'max_file_size': int,
+                        'max-file-size': int,
+                        'max_file_count': int,
+                        'max-file-count': int,
+                        'logserver_prefix': str,
+                        'logserver-prefix': str,
+                        }
+
+        auth = {'secrets': to_list(str),
+                'inherit': bool,
+                'swift-tmpurl': to_list(swift_tmpurl),
+                }
+
+        node = {vs.Required('name'): str,
+                vs.Required('image'): str,
+                }
+
+        zuul_role = {vs.Required('zuul'): str,
+                     'name': str}
+
+        galaxy_role = {vs.Required('galaxy'): str,
+                       'name': str}
+
+        role = vs.Any(zuul_role, galaxy_role)
+
+        job = {vs.Required('name'): str,
+               'parent': str,
+               'queue-name': str,
+               'failure-message': str,
+               'success-message': str,
+               'failure-url': str,
+               'success-url': str,
+               'hold-following-changes': bool,
+               'voting': bool,
+               'mutex': str,
+               'tags': to_list(str),
+               'branches': to_list(str),
+               'files': to_list(str),
+               'auth': to_list(auth),
+               'irrelevant-files': to_list(str),
+               'nodes': vs.Any([node], str),
+               'timeout': int,
+               'attempts': int,
+               'pre-run': to_list(str),
+               'post-run': to_list(str),
+               'run': str,
+               '_source_context': model.SourceContext,
+               'roles': to_list(role),
+               'repos': to_list(str),
+               'vars': dict,
+               }
+
+        return vs.Schema(job)
+
+    simple_attributes = [
+        'timeout',
+        'workspace',
+        'voting',
+        'hold-following-changes',
+        'mutex',
+        'attempts',
+        'failure-message',
+        'success-message',
+        'failure-url',
+        'success-url',
+    ]
+
+    @staticmethod
+    def fromYaml(tenant, layout, conf):
+        with configuration_exceptions('job', conf):
+            JobParser.getSchema()(conf)
+
+        # NB: The default detection system in the Job class requires
+        # that we always assign values directly rather than modifying
+        # them (e.g., "job.run = ..." rather than
+        # "job.run.append(...)").
+
+        job = model.Job(conf['name'])
+        job.source_context = conf.get('_source_context')
+        if 'auth' in conf:
+            job.auth = conf.get('auth')
+
+        if 'parent' in conf:
+            parent = layout.getJob(conf['parent'])
+            job.inheritFrom(parent)
+
+        for pre_run_name in as_list(conf.get('pre-run')):
+            full_pre_run_name = os.path.join('playbooks', pre_run_name)
+            pre_run = model.PlaybookContext(job.source_context,
+                                            full_pre_run_name)
+            job.pre_run = job.pre_run + (pre_run,)
+        for post_run_name in as_list(conf.get('post-run')):
+            full_post_run_name = os.path.join('playbooks', post_run_name)
+            post_run = model.PlaybookContext(job.source_context,
+                                             full_post_run_name)
+            job.post_run = (post_run,) + job.post_run
+        if 'run' in conf:
+            run_name = os.path.join('playbooks', conf['run'])
+            run = model.PlaybookContext(job.source_context, run_name)
+            job.run = (run,)
+        else:
+            run_name = os.path.join('playbooks', job.name)
+            run = model.PlaybookContext(job.source_context, run_name)
+            job.implied_run = (run,) + job.implied_run
+
+        for k in JobParser.simple_attributes:
+            a = k.replace('-', '_')
+            if k in conf:
+                setattr(job, a, conf[k])
+        if 'nodes' in conf:
+            conf_nodes = conf['nodes']
+            if isinstance(conf_nodes, six.string_types):
+                # This references an existing named nodeset in the layout.
+                ns = layout.nodesets[conf_nodes]
+            else:
+                ns = model.NodeSet()
+                for conf_node in conf_nodes:
+                    node = model.Node(conf_node['name'], conf_node['image'])
+                    ns.addNode(node)
+            job.nodeset = ns
+
+        if 'repos' in conf:
+            # Accumulate repos in a set so that job inheritance
+            # is additive.
+            job.repos = job.repos.union(set(conf.get('repos', [])))
+
+        tags = conf.get('tags')
+        if tags:
+            # Tags are merged via a union rather than a
+            # destructive copy because they are intended to
+            # accumulate onto any previously applied tags.
+            job.tags = job.tags.union(set(tags))
+
+        roles = []
+        for role in conf.get('roles', []):
+            if 'zuul' in role:
+                r = JobParser._makeZuulRole(tenant, job, role)
+                if r:
+                    roles.append(r)
+        job.roles = job.roles.union(set(roles))
+
+        variables = conf.get('vars', None)
+        if variables:
+            job.updateVariables(variables)
+
+        # If the definition for this job came from a project repo,
+        # implicitly apply a branch matcher for the branch it was on.
+        if (not job.source_context.trusted):
+            branches = [job.source_context.branch]
+        elif 'branches' in conf:
+            branches = as_list(conf['branches'])
+        else:
+            branches = None
+        if branches:
+            matchers = []
+            for branch in branches:
+                matchers.append(change_matcher.BranchMatcher(branch))
+            job.branch_matcher = change_matcher.MatchAny(matchers)
+        if 'files' in conf:
+            matchers = []
+            for fn in as_list(conf['files']):
+                matchers.append(change_matcher.FileMatcher(fn))
+            job.file_matcher = change_matcher.MatchAny(matchers)
+        if 'irrelevant-files' in conf:
+            matchers = []
+            for fn in as_list(conf['irrelevant-files']):
+                matchers.append(change_matcher.FileMatcher(fn))
+            job.irrelevant_file_matcher = change_matcher.MatchAllFiles(
+                matchers)
+        return job
+
+    @staticmethod
+    def _makeZuulRole(tenant, job, role):
+        name = role['zuul'].split('/')[-1]
+
+        # TODOv3(jeblair): this limits roles to the same
+        # source; we should remove that limitation.
+        source = job.source_context.project.connection_name
+        (trusted, project) = tenant.getRepo(source, role['zuul'])
+        if project is None:
+            return None
+
+        return model.ZuulRole(role.get('name', name), source,
+                              project.name, trusted)
+
+
+class ProjectTemplateParser(object):
+    log = logging.getLogger("zuul.ProjectTemplateParser")
+
+    @staticmethod
+    def getSchema(layout):
+        project_template = {
+            vs.Required('name'): str,
+            'merge-mode': vs.Any(
+                'merge', 'merge-resolve',
+                'cherry-pick'),
+            '_source_context': model.SourceContext,
+        }
+
+        for p in layout.pipelines.values():
+            project_template[p.name] = {'queue': str,
+                                        'jobs': [vs.Any(str, dict)]}
+        return vs.Schema(project_template)
+
+    @staticmethod
+    def fromYaml(tenant, layout, conf):
+        with configuration_exceptions('project or project-template', conf):
+            ProjectTemplateParser.getSchema(layout)(conf)
+        # Make a copy since we modify this later via pop
+        conf = copy.deepcopy(conf)
+        project_template = model.ProjectConfig(conf['name'])
+        source_context = conf['_source_context']
+        for pipeline in layout.pipelines.values():
+            conf_pipeline = conf.get(pipeline.name)
+            if not conf_pipeline:
+                continue
+            project_pipeline = model.ProjectPipelineConfig()
+            project_template.pipelines[pipeline.name] = project_pipeline
+            project_pipeline.queue_name = conf_pipeline.get('queue')
+            project_pipeline.job_tree = ProjectTemplateParser._parseJobTree(
+                tenant, layout, conf_pipeline.get('jobs', []),
+                source_context)
+        return project_template
+
+    @staticmethod
+    def _parseJobTree(tenant, layout, conf, source_context, tree=None):
+        if not tree:
+            tree = model.JobTree(None)
+        for conf_job in conf:
+            if isinstance(conf_job, six.string_types):
+                job = model.Job(conf_job)
+                tree.addJob(job)
+            elif isinstance(conf_job, dict):
+                # A dictionary in a job tree may override params, or
+                # be the root of a sub job tree, or both.
+                jobname, attrs = conf_job.items()[0]
+                jobs = attrs.pop('jobs', None)
+                if attrs:
+                    # We are overriding params, so make a new job def
+                    attrs['name'] = jobname
+                    attrs['_source_context'] = source_context
+                    subtree = tree.addJob(JobParser.fromYaml(
+                        tenant, layout, attrs))
+                else:
+                    # Not overriding, so add a blank job
+                    job = model.Job(jobname)
+                    subtree = tree.addJob(job)
+
+                if jobs:
+                    # This is the root of a sub tree
+                    ProjectTemplateParser._parseJobTree(
+                        tenant, layout, jobs, source_context, subtree)
+            else:
+                raise Exception("Job must be a string or dictionary")
+        return tree
+
+
+class ProjectParser(object):
+    log = logging.getLogger("zuul.ProjectParser")
+
+    @staticmethod
+    def getSchema(layout):
+        project = {
+            vs.Required('name'): str,
+            'templates': [str],
+            'merge-mode': vs.Any('merge', 'merge-resolve',
+                                 'cherry-pick'),
+            '_source_context': model.SourceContext,
+        }
+
+        for p in layout.pipelines.values():
+            project[p.name] = {'queue': str,
+                               'jobs': [vs.Any(str, dict)]}
+        return vs.Schema(project)
+
+    @staticmethod
+    def fromYaml(tenant, layout, conf_list):
+        for conf in conf_list:
+            with configuration_exceptions('project', conf):
+                ProjectParser.getSchema(layout)(conf)
+        project = model.ProjectConfig(conf_list[0]['name'])
+        mode = conf_list[0].get('merge-mode', 'merge-resolve')
+        project.merge_mode = model.MERGER_MAP[mode]
+
+        # TODOv3(jeblair): deal with merge mode setting on multi branches
+        configs = []
+        for conf in conf_list:
+            # Make a copy since we modify this later via pop
+            conf = copy.deepcopy(conf)
+            conf_templates = conf.pop('templates', [])
+            # The way we construct a project definition is by parsing the
+            # definition as a template, then applying all of the
+            # templates, including the newly parsed one, in order.
+            project_template = ProjectTemplateParser.fromYaml(
+                tenant, layout, conf)
+            configs.extend([layout.project_templates[name]
+                            for name in conf_templates])
+            configs.append(project_template)
+        for pipeline in layout.pipelines.values():
+            project_pipeline = model.ProjectPipelineConfig()
+            project_pipeline.job_tree = model.JobTree(None)
+            queue_name = None
+            # For every template, iterate over the job tree and replace or
+            # create the jobs in the final definition as needed.
+            pipeline_defined = False
+            for template in configs:
+                if pipeline.name in template.pipelines:
+                    ProjectParser.log.debug(
+                        "Applying template %s to pipeline %s" %
+                        (template.name, pipeline.name))
+                    pipeline_defined = True
+                    template_pipeline = template.pipelines[pipeline.name]
+                    project_pipeline.job_tree.inheritFrom(
+                        template_pipeline.job_tree)
+                    if template_pipeline.queue_name:
+                        queue_name = template_pipeline.queue_name
+            if queue_name:
+                project_pipeline.queue_name = queue_name
+            if pipeline_defined:
+                project.pipelines[pipeline.name] = project_pipeline
+        return project
+
+
+class PipelineParser(object):
+    log = logging.getLogger("zuul.PipelineParser")
+
+    # A set of reporter configuration keys to action mapping
+    reporter_actions = {
+        'start': 'start_actions',
+        'success': 'success_actions',
+        'failure': 'failure_actions',
+        'merge-failure': 'merge_failure_actions',
+        'disabled': 'disabled_actions',
+    }
+
+    @staticmethod
+    def getDriverSchema(dtype, connections):
+        methods = {
+            'trigger': 'getTriggerSchema',
+            'reporter': 'getReporterSchema',
+        }
+
+        schema = {}
+        # Add the configured connections as available layout options
+        for connection_name, connection in connections.connections.items():
+            method = getattr(connection.driver, methods[dtype], None)
+            if method:
+                schema[connection_name] = to_list(method())
+
+        return schema
+
+    @staticmethod
+    def getSchema(layout, connections):
+        manager = vs.Any('independent',
+                         'dependent')
+
+        precedence = vs.Any('normal', 'low', 'high')
+
+        approval = vs.Schema({'username': str,
+                              'email-filter': str,
+                              'email': str,
+                              'older-than': str,
+                              'newer-than': str,
+                              }, extra=True)
+
+        require = {'approval': to_list(approval),
+                   'open': bool,
+                   'current-patchset': bool,
+                   'status': to_list(str)}
+
+        reject = {'approval': to_list(approval)}
+
+        window = vs.All(int, vs.Range(min=0))
+        window_floor = vs.All(int, vs.Range(min=1))
+        window_type = vs.Any('linear', 'exponential')
+        window_factor = vs.All(int, vs.Range(min=1))
+
+        pipeline = {vs.Required('name'): str,
+                    vs.Required('manager'): manager,
+                    'source': str,
+                    'precedence': precedence,
+                    'description': str,
+                    'require': require,
+                    'reject': reject,
+                    'success-message': str,
+                    'failure-message': str,
+                    'merge-failure-message': str,
+                    'footer-message': str,
+                    'dequeue-on-new-patchset': bool,
+                    'ignore-dependencies': bool,
+                    'disable-after-consecutive-failures':
+                        vs.All(int, vs.Range(min=1)),
+                    'window': window,
+                    'window-floor': window_floor,
+                    'window-increase-type': window_type,
+                    'window-increase-factor': window_factor,
+                    'window-decrease-type': window_type,
+                    'window-decrease-factor': window_factor,
+                    '_source_context': model.SourceContext,
+                    }
+        pipeline['trigger'] = vs.Required(
+            PipelineParser.getDriverSchema('trigger', connections))
+        for action in ['start', 'success', 'failure', 'merge-failure',
+                       'disabled']:
+            pipeline[action] = PipelineParser.getDriverSchema('reporter',
+                                                              connections)
+        return vs.Schema(pipeline)
+
+    @staticmethod
+    def fromYaml(layout, connections, scheduler, conf):
+        with configuration_exceptions('pipeline', conf):
+            PipelineParser.getSchema(layout, connections)(conf)
+        pipeline = model.Pipeline(conf['name'], layout)
+        pipeline.description = conf.get('description')
+
+        pipeline.source = connections.getSource(conf['source'])
+
+        precedence = model.PRECEDENCE_MAP[conf.get('precedence')]
+        pipeline.precedence = precedence
+        pipeline.failure_message = conf.get('failure-message',
+                                            "Build failed.")
+        pipeline.merge_failure_message = conf.get(
+            'merge-failure-message', "Merge Failed.\n\nThis change or one "
+            "of its cross-repo dependencies was unable to be "
+            "automatically merged with the current state of its "
+            "repository. Please rebase the change and upload a new "
+            "patchset.")
+        pipeline.success_message = conf.get('success-message',
+                                            "Build succeeded.")
+        pipeline.footer_message = conf.get('footer-message', "")
+        pipeline.start_message = conf.get('start-message',
+                                          "Starting {pipeline.name} jobs.")
+        pipeline.dequeue_on_new_patchset = conf.get(
+            'dequeue-on-new-patchset', True)
+        pipeline.ignore_dependencies = conf.get(
+            'ignore-dependencies', False)
+
+        for conf_key, action in PipelineParser.reporter_actions.items():
+            reporter_set = []
+            if conf.get(conf_key):
+                for reporter_name, params \
+                    in conf.get(conf_key).items():
+                    reporter = connections.getReporter(reporter_name,
+                                                       params)
+                    reporter.setAction(conf_key)
+                    reporter_set.append(reporter)
+            setattr(pipeline, action, reporter_set)
+
+        # If merge-failure actions aren't explicit, use the failure actions
+        if not pipeline.merge_failure_actions:
+            pipeline.merge_failure_actions = pipeline.failure_actions
+
+        pipeline.disable_at = conf.get(
+            'disable-after-consecutive-failures', None)
+
+        pipeline.window = conf.get('window', 20)
+        pipeline.window_floor = conf.get('window-floor', 3)
+        pipeline.window_increase_type = conf.get(
+            'window-increase-type', 'linear')
+        pipeline.window_increase_factor = conf.get(
+            'window-increase-factor', 1)
+        pipeline.window_decrease_type = conf.get(
+            'window-decrease-type', 'exponential')
+        pipeline.window_decrease_factor = conf.get(
+            'window-decrease-factor', 2)
+
+        manager_name = conf['manager']
+        if manager_name == 'dependent':
+            manager = zuul.manager.dependent.DependentPipelineManager(
+                scheduler, pipeline)
+        elif manager_name == 'independent':
+            manager = zuul.manager.independent.IndependentPipelineManager(
+                scheduler, pipeline)
+
+        pipeline.setManager(manager)
+        layout.pipelines[conf['name']] = pipeline
+
+        if 'require' in conf or 'reject' in conf:
+            require = conf.get('require', {})
+            reject = conf.get('reject', {})
+            f = model.ChangeishFilter(
+                open=require.get('open'),
+                current_patchset=require.get('current-patchset'),
+                statuses=as_list(require.get('status')),
+                required_approvals=as_list(require.get('approval')),
+                reject_approvals=as_list(reject.get('approval'))
+            )
+            manager.changeish_filters.append(f)
+
+        for trigger_name, trigger_config\
+            in conf.get('trigger').items():
+            trigger = connections.getTrigger(trigger_name, trigger_config)
+            pipeline.triggers.append(trigger)
+
+            # TODO: move
+            manager.event_filters += trigger.getEventFilters(
+                conf['trigger'][trigger_name])
+
+        return pipeline
+
+
+class TenantParser(object):
+    log = logging.getLogger("zuul.TenantParser")
+
+    tenant_source = vs.Schema({'config-repos': [str],
+                               'project-repos': [str]})
+
+    @staticmethod
+    def validateTenantSources(connections):
+        def v(value, path=[]):
+            if isinstance(value, dict):
+                for k, val in value.items():
+                    connections.getSource(k)
+                    TenantParser.validateTenantSource(val, path + [k])
+            else:
+                raise vs.Invalid("Invalid tenant source", path)
+        return v
+
+    @staticmethod
+    def validateTenantSource(value, path=[]):
+        TenantParser.tenant_source(value)
+
+    @staticmethod
+    def getSchema(connections=None):
+        tenant = {vs.Required('name'): str,
+                  'source': TenantParser.validateTenantSources(connections)}
+        return vs.Schema(tenant)
+
+    @staticmethod
+    def fromYaml(base, connections, scheduler, merger, conf, cached):
+        TenantParser.getSchema(connections)(conf)
+        tenant = model.Tenant(conf['name'])
+        tenant.unparsed_config = conf
+        unparsed_config = model.UnparsedTenantConfig()
+        tenant.config_repos, tenant.project_repos = \
+            TenantParser._loadTenantConfigRepos(connections, conf)
+        for source, repo in tenant.config_repos:
+            tenant.addConfigRepo(source, repo)
+        for source, repo in tenant.project_repos:
+            tenant.addProjectRepo(source, repo)
+        tenant.config_repos_config, tenant.project_repos_config = \
+            TenantParser._loadTenantInRepoLayouts(merger, connections,
+                                                  tenant.config_repos,
+                                                  tenant.project_repos,
+                                                  cached)
+        unparsed_config.extend(tenant.config_repos_config)
+        unparsed_config.extend(tenant.project_repos_config)
+        tenant.layout = TenantParser._parseLayout(base, tenant,
+                                                  unparsed_config,
+                                                  scheduler,
+                                                  connections)
+        tenant.layout.tenant = tenant
+        return tenant
+
+    @staticmethod
+    def _loadTenantConfigRepos(connections, conf_tenant):
+        config_repos = []
+        project_repos = []
+
+        for source_name, conf_source in conf_tenant.get('source', {}).items():
+            source = connections.getSource(source_name)
+
+            for conf_repo in conf_source.get('config-repos', []):
+                project = source.getProject(conf_repo)
+                config_repos.append((source, project))
+
+            for conf_repo in conf_source.get('project-repos', []):
+                project = source.getProject(conf_repo)
+                project_repos.append((source, project))
+
+        return config_repos, project_repos
+
+    @staticmethod
+    def _loadTenantInRepoLayouts(merger, connections, config_repos,
+                                 project_repos, cached):
+        config_repos_config = model.UnparsedTenantConfig()
+        project_repos_config = model.UnparsedTenantConfig()
+        jobs = []
+
+        for (source, project) in config_repos:
+            # If we have cached data (this is a reconfiguration) use it.
+            if cached and project.unparsed_config:
+                TenantParser.log.info(
+                    "Loading previously parsed configuration from %s" %
+                    (project,))
+                config_repos_config.extend(project.unparsed_config)
+                continue
+            # Otherwise, prepare an empty unparsed config object to
+            # hold cached data later.
+            project.unparsed_config = model.UnparsedTenantConfig()
+            # Get main config files.  These files are permitted the
+            # full range of configuration.
+            url = source.getGitUrl(project)
+            job = merger.getFiles(project.name, url, 'master',
+                                  files=['zuul.yaml', '.zuul.yaml'])
+            job.source_context = model.SourceContext(project, 'master', True)
+            jobs.append(job)
+
+        for (source, project) in project_repos:
+            # If we have cached data (this is a reconfiguration) use it.
+            if cached and project.unparsed_config:
+                TenantParser.log.info(
+                    "Loading previously parsed configuration from %s" %
+                    (project,))
+                project_repos_config.extend(project.unparsed_config)
+                continue
+            # Otherwise, prepare an empty unparsed config object to
+            # hold cached data later.
+            project.unparsed_config = model.UnparsedTenantConfig()
+            # Get in-project-repo config files which have a restricted
+            # set of options.
+            url = source.getGitUrl(project)
+            # For each branch in the repo, get the zuul.yaml for that
+            # branch.  Remember the branch and then implicitly add a
+            # branch selector to each job there.  This makes the
+            # in-repo configuration apply only to that branch.
+            for branch in source.getProjectBranches(project):
+                project.unparsed_branch_config[branch] = \
+                    model.UnparsedTenantConfig()
+                job = merger.getFiles(project.name, url, branch,
+                                      files=['.zuul.yaml'])
+                job.source_context = model.SourceContext(project,
+                                                         branch, False)
+                jobs.append(job)
+
+        for job in jobs:
+            # Note: this is an ordered list -- we wait for cat jobs to
+            # complete in the order they were launched which is the
+            # same order they were defined in the main config file.
+            # This is important for correct inheritance.
+            TenantParser.log.debug("Waiting for cat job %s" % (job,))
+            job.wait()
+            for fn in ['zuul.yaml', '.zuul.yaml']:
+                if job.files.get(fn):
+                    TenantParser.log.info(
+                        "Loading configuration from %s/%s" %
+                        (job.source_context, fn))
+                    project = job.source_context.project
+                    branch = job.source_context.branch
+                    if job.source_context.trusted:
+                        incdata = TenantParser._parseConfigRepoLayout(
+                            job.files[fn], job.source_context)
+                        config_repos_config.extend(incdata)
+                    else:
+                        incdata = TenantParser._parseProjectRepoLayout(
+                            job.files[fn], job.source_context)
+                        project_repos_config.extend(incdata)
+                    project.unparsed_config.extend(incdata)
+                    if branch in project.unparsed_branch_config:
+                        project.unparsed_branch_config[branch].extend(incdata)
+        return config_repos_config, project_repos_config
+
+    @staticmethod
+    def _parseConfigRepoLayout(data, source_context):
+        # This is the top-level configuration for a tenant.
+        config = model.UnparsedTenantConfig()
+        config.extend(yaml.load(data), source_context)
+        return config
+
+    @staticmethod
+    def _parseProjectRepoLayout(data, source_context):
+        # TODOv3(jeblair): this should implement some rules to protect
+        # aspects of the config that should not be changed in-repo
+        config = model.UnparsedTenantConfig()
+        config.extend(yaml.load(data), source_context)
+
+        return config
+
+    @staticmethod
+    def _parseLayout(base, tenant, data, scheduler, connections):
+        layout = model.Layout()
+
+        for config_pipeline in data.pipelines:
+            layout.addPipeline(PipelineParser.fromYaml(layout, connections,
+                                                       scheduler,
+                                                       config_pipeline))
+
+        for config_nodeset in data.nodesets:
+            layout.addNodeSet(NodeSetParser.fromYaml(layout, config_nodeset))
+
+        for config_job in data.jobs:
+            layout.addJob(JobParser.fromYaml(tenant, layout, config_job))
+
+        for config_template in data.project_templates:
+            layout.addProjectTemplate(ProjectTemplateParser.fromYaml(
+                tenant, layout, config_template))
+
+        for config_project in data.projects.values():
+            layout.addProjectConfig(ProjectParser.fromYaml(
+                tenant, layout, config_project))
+
+        for pipeline in layout.pipelines.values():
+            pipeline.manager._postConfig(layout)
+
+        return layout
+
+
+class ConfigLoader(object):
+    log = logging.getLogger("zuul.ConfigLoader")
+
+    def expandConfigPath(self, config_path):
+        if config_path:
+            config_path = os.path.expanduser(config_path)
+        if not os.path.exists(config_path):
+            raise Exception("Unable to read tenant config file at %s" %
+                            config_path)
+        return config_path
+
+    def loadConfig(self, config_path, scheduler, merger, connections):
+        abide = model.Abide()
+
+        config_path = self.expandConfigPath(config_path)
+        with open(config_path) as config_file:
+            self.log.info("Loading configuration from %s" % (config_path,))
+            data = yaml.load(config_file)
+        config = model.UnparsedAbideConfig()
+        config.extend(data)
+        base = os.path.dirname(os.path.realpath(config_path))
+
+        for conf_tenant in config.tenants:
+            # When performing a full reload, do not use cached data.
+            tenant = TenantParser.fromYaml(base, connections, scheduler,
+                                           merger, conf_tenant, cached=False)
+            abide.tenants[tenant.name] = tenant
+        return abide
+
+    def reloadTenant(self, config_path, scheduler, merger, connections,
+                     abide, tenant):
+        new_abide = model.Abide()
+        new_abide.tenants = abide.tenants.copy()
+
+        config_path = self.expandConfigPath(config_path)
+        base = os.path.dirname(os.path.realpath(config_path))
+
+        # When reloading a tenant only, use cached data if available.
+        new_tenant = TenantParser.fromYaml(base, connections, scheduler,
+                                           merger, tenant.unparsed_config,
+                                           cached=True)
+        new_abide.tenants[tenant.name] = new_tenant
+        return new_abide
+
+    def createDynamicLayout(self, tenant, files):
+        config = tenant.config_repos_config.copy()
+        for source, project in tenant.project_repos:
+            for branch in source.getProjectBranches(project):
+                data = files.getFile(project.name, branch, '.zuul.yaml')
+                if data:
+                    source_context = model.SourceContext(project,
+                                                         branch, False)
+                    incdata = TenantParser._parseProjectRepoLayout(
+                        data, source_context)
+                else:
+                    incdata = project.unparsed_branch_config[branch]
+                if not incdata:
+                    continue
+                config.extend(incdata)
+        layout = model.Layout()
+        # TODOv3(jeblair): copying the pipelines could be dangerous/confusing.
+        layout.pipelines = tenant.layout.pipelines
+
+        for config_job in config.jobs:
+            layout.addJob(JobParser.fromYaml(tenant, layout, config_job))
+
+        for config_template in config.project_templates:
+            layout.addProjectTemplate(ProjectTemplateParser.fromYaml(
+                tenant, layout, config_template))
+
+        for config_project in config.projects.values():
+            layout.addProjectConfig(ProjectParser.fromYaml(
+                tenant, layout, config_project), update_pipeline=False)
+        return layout
diff --git a/zuul/connection/__init__.py b/zuul/connection/__init__.py
index 066b4db..6913294 100644
--- a/zuul/connection/__init__.py
+++ b/zuul/connection/__init__.py
@@ -34,23 +34,16 @@
     into. For example, a trigger will likely require some kind of query method
     while a reporter may need a review method."""
 
-    def __init__(self, connection_name, connection_config):
+    def __init__(self, driver, connection_name, connection_config):
         # connection_name is the name given to this connection in zuul.ini
         # connection_config is a dictionary of config_section from zuul.ini for
         # this connection.
         # __init__ shouldn't make the actual connection in case this connection
         # isn't used in the layout.
+        self.driver = driver
         self.connection_name = connection_name
         self.connection_config = connection_config
 
-        # Keep track of the sources, triggers and reporters using this
-        # connection
-        self.attached_to = {
-            'source': [],
-            'trigger': [],
-            'reporter': [],
-        }
-
     def onLoad(self):
         pass
 
@@ -60,9 +53,6 @@
     def registerScheduler(self, sched):
         self.sched = sched
 
-    def registerUse(self, what, instance):
-        self.attached_to[what].append(instance)
-
     def maintainCache(self, relevant):
         """Make cache contain relevant changes.
 
diff --git a/zuul/connection/gerrit.py b/zuul/connection/gerrit.py
deleted file mode 100644
index 6e8d085..0000000
--- a/zuul/connection/gerrit.py
+++ /dev/null
@@ -1,486 +0,0 @@
-# Copyright 2011 OpenStack, LLC.
-# Copyright 2012 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import threading
-import select
-import json
-import time
-from six.moves import queue as Queue
-from six.moves import urllib
-import paramiko
-import logging
-import pprint
-import voluptuous as v
-
-from zuul.connection import BaseConnection
-from zuul.model import TriggerEvent
-
-
-class GerritEventConnector(threading.Thread):
-    """Move events from Gerrit to the scheduler."""
-
-    log = logging.getLogger("zuul.GerritEventConnector")
-    delay = 10.0
-
-    def __init__(self, connection):
-        super(GerritEventConnector, self).__init__()
-        self.daemon = True
-        self.connection = connection
-        self._stopped = False
-
-    def stop(self):
-        self._stopped = True
-        self.connection.addEvent(None)
-
-    def _handleEvent(self):
-        ts, data = self.connection.getEvent()
-        if self._stopped:
-            return
-        # Gerrit can produce inconsistent data immediately after an
-        # event, So ensure that we do not deliver the event to Zuul
-        # until at least a certain amount of time has passed.  Note
-        # that if we receive several events in succession, we will
-        # only need to delay for the first event.  In essence, Zuul
-        # should always be a constant number of seconds behind Gerrit.
-        now = time.time()
-        time.sleep(max((ts + self.delay) - now, 0.0))
-        event = TriggerEvent()
-        event.type = data.get('type')
-        event.trigger_name = 'gerrit'
-        change = data.get('change')
-        if change:
-            event.project_name = change.get('project')
-            event.branch = change.get('branch')
-            event.change_number = str(change.get('number'))
-            event.change_url = change.get('url')
-            patchset = data.get('patchSet')
-            if patchset:
-                event.patch_number = patchset.get('number')
-                event.refspec = patchset.get('ref')
-            event.approvals = data.get('approvals', [])
-            event.comment = data.get('comment')
-        refupdate = data.get('refUpdate')
-        if refupdate:
-            event.project_name = refupdate.get('project')
-            event.ref = refupdate.get('refName')
-            event.oldrev = refupdate.get('oldRev')
-            event.newrev = refupdate.get('newRev')
-        # Map the event types to a field name holding a Gerrit
-        # account attribute. See Gerrit stream-event documentation
-        # in cmd-stream-events.html
-        accountfield_from_type = {
-            'patchset-created': 'uploader',
-            'draft-published': 'uploader',  # Gerrit 2.5/2.6
-            'change-abandoned': 'abandoner',
-            'change-restored': 'restorer',
-            'change-merged': 'submitter',
-            'merge-failed': 'submitter',  # Gerrit 2.5/2.6
-            'comment-added': 'author',
-            'ref-updated': 'submitter',
-            'reviewer-added': 'reviewer',  # Gerrit 2.5/2.6
-        }
-        try:
-            event.account = data.get(accountfield_from_type[event.type])
-        except KeyError:
-            self.log.warning("Received unrecognized event type '%s' from Gerrit.\
-                    Can not get account information." % event.type)
-            event.account = None
-
-        if (event.change_number and
-            self.connection.sched.getProject(event.project_name)):
-            # Call _getChange for the side effect of updating the
-            # cache.  Note that this modifies Change objects outside
-            # the main thread.
-            # NOTE(jhesketh): Ideally we'd just remove the change from the
-            # cache to denote that it needs updating. However the change
-            # object is already used by Item's and hence BuildSet's etc. and
-            # we need to update those objects by reference so that they have
-            # the correct/new information and also avoid hitting gerrit
-            # multiple times.
-            if self.connection.attached_to['source']:
-                self.connection.attached_to['source'][0]._getChange(
-                    event.change_number, event.patch_number, refresh=True)
-                # We only need to do this once since the connection maintains
-                # the cache (which is shared between all the sources)
-                # NOTE(jhesketh): We may couple sources and connections again
-                # at which point this becomes more sensible.
-        self.connection.sched.addEvent(event)
-
-    def run(self):
-        while True:
-            if self._stopped:
-                return
-            try:
-                self._handleEvent()
-            except:
-                self.log.exception("Exception moving Gerrit event:")
-            finally:
-                self.connection.eventDone()
-
-
-class GerritWatcher(threading.Thread):
-    log = logging.getLogger("gerrit.GerritWatcher")
-    poll_timeout = 500
-
-    def __init__(self, gerrit_connection, username, hostname, port=29418,
-                 keyfile=None, keepalive=60):
-        threading.Thread.__init__(self)
-        self.username = username
-        self.keyfile = keyfile
-        self.hostname = hostname
-        self.port = port
-        self.gerrit_connection = gerrit_connection
-        self.keepalive = keepalive
-        self._stopped = False
-
-    def _read(self, fd):
-        l = fd.readline()
-        data = json.loads(l)
-        self.log.debug("Received data from Gerrit event stream: \n%s" %
-                       pprint.pformat(data))
-        self.gerrit_connection.addEvent(data)
-
-    def _listen(self, stdout, stderr):
-        poll = select.poll()
-        poll.register(stdout.channel)
-        while not self._stopped:
-            ret = poll.poll(self.poll_timeout)
-            for (fd, event) in ret:
-                if fd == stdout.channel.fileno():
-                    if event == select.POLLIN:
-                        self._read(stdout)
-                    else:
-                        raise Exception("event on ssh connection")
-
-    def _run(self):
-        try:
-            client = paramiko.SSHClient()
-            client.load_system_host_keys()
-            client.set_missing_host_key_policy(paramiko.WarningPolicy())
-            client.connect(self.hostname,
-                           username=self.username,
-                           port=self.port,
-                           key_filename=self.keyfile)
-            transport = client.get_transport()
-            transport.set_keepalive(self.keepalive)
-
-            stdin, stdout, stderr = client.exec_command("gerrit stream-events")
-
-            self._listen(stdout, stderr)
-
-            if not stdout.channel.exit_status_ready():
-                # The stream-event is still running but we are done polling
-                # on stdout most likely due to being asked to stop.
-                # Try to stop the stream-events command sending Ctrl-C
-                stdin.write("\x03")
-                time.sleep(.2)
-                if not stdout.channel.exit_status_ready():
-                    # we're still not ready to exit, lets force the channel
-                    # closed now.
-                    stdout.channel.close()
-            ret = stdout.channel.recv_exit_status()
-            self.log.debug("SSH exit status: %s" % ret)
-            client.close()
-
-            if ret and ret not in [-1, 130]:
-                raise Exception("Gerrit error executing stream-events")
-        except:
-            self.log.exception("Exception on ssh event stream:")
-            time.sleep(5)
-
-    def run(self):
-        while not self._stopped:
-            self._run()
-
-    def stop(self):
-        self.log.debug("Stopping watcher")
-        self._stopped = True
-
-
-class GerritConnection(BaseConnection):
-    driver_name = 'gerrit'
-    log = logging.getLogger("zuul.GerritConnection")
-
-    def __init__(self, connection_name, connection_config):
-        super(GerritConnection, self).__init__(connection_name,
-                                               connection_config)
-        if 'server' not in self.connection_config:
-            raise Exception('server is required for gerrit connections in '
-                            '%s' % self.connection_name)
-        if 'user' not in self.connection_config:
-            raise Exception('user is required for gerrit connections in '
-                            '%s' % self.connection_name)
-
-        self.user = self.connection_config.get('user')
-        self.server = self.connection_config.get('server')
-        self.port = int(self.connection_config.get('port', 29418))
-        self.keyfile = self.connection_config.get('sshkey', None)
-        self.keepalive = int(self.connection_config.get('keepalive', 60))
-        self.watcher_thread = None
-        self.event_queue = None
-        self.client = None
-
-        self.baseurl = self.connection_config.get('baseurl',
-                                                  'https://%s' % self.server)
-
-        self._change_cache = {}
-        self.gerrit_event_connector = None
-
-    def getCachedChange(self, key):
-        if key in self._change_cache:
-            return self._change_cache.get(key)
-        return None
-
-    def updateChangeCache(self, key, value):
-        self._change_cache[key] = value
-
-    def deleteCachedChange(self, key):
-        if key in self._change_cache:
-            del self._change_cache[key]
-
-    def maintainCache(self, relevant):
-        # This lets the user supply a list of change objects that are
-        # still in use.  Anything in our cache that isn't in the supplied
-        # list should be safe to remove from the cache.
-        remove = []
-        for key, change in self._change_cache.items():
-            if change not in relevant:
-                remove.append(key)
-        for key in remove:
-            del self._change_cache[key]
-
-    def addEvent(self, data):
-        return self.event_queue.put((time.time(), data))
-
-    def getEvent(self):
-        return self.event_queue.get()
-
-    def eventDone(self):
-        self.event_queue.task_done()
-
-    def review(self, project, change, message, action={}):
-        cmd = 'gerrit review --project %s' % project
-        if message:
-            cmd += ' --message "%s"' % message
-        for key, val in action.items():
-            if val is True:
-                cmd += ' --%s' % key
-            else:
-                cmd += ' --%s %s' % (key, val)
-        cmd += ' %s' % change
-        out, err = self._ssh(cmd)
-        return err
-
-    def query(self, query):
-        args = '--all-approvals --comments --commit-message'
-        args += ' --current-patch-set --dependencies --files'
-        args += ' --patch-sets --submit-records'
-        cmd = 'gerrit query --format json %s %s' % (
-            args, query)
-        out, err = self._ssh(cmd)
-        if not out:
-            return False
-        lines = out.split('\n')
-        if not lines:
-            return False
-        data = json.loads(lines[0])
-        if not data:
-            return False
-        self.log.debug("Received data from Gerrit query: \n%s" %
-                       (pprint.pformat(data)))
-        return data
-
-    def simpleQuery(self, query):
-        def _query_chunk(query):
-            args = '--commit-message --current-patch-set'
-
-            cmd = 'gerrit query --format json %s %s' % (
-                args, query)
-            out, err = self._ssh(cmd)
-            if not out:
-                return False
-            lines = out.split('\n')
-            if not lines:
-                return False
-
-            # filter out blank lines
-            data = [json.loads(line) for line in lines
-                    if line.startswith('{')]
-
-            # check last entry for more changes
-            more_changes = None
-            if 'moreChanges' in data[-1]:
-                more_changes = data[-1]['moreChanges']
-
-            # we have to remove the statistics line
-            del data[-1]
-
-            if not data:
-                return False, more_changes
-            self.log.debug("Received data from Gerrit query: \n%s" %
-                           (pprint.pformat(data)))
-            return data, more_changes
-
-        # gerrit returns 500 results by default, so implement paging
-        # for large projects like nova
-        alldata = []
-        chunk, more_changes = _query_chunk(query)
-        while(chunk):
-            alldata.extend(chunk)
-            if more_changes is None:
-                # continue sortKey based (before Gerrit 2.9)
-                resume = "resume_sortkey:'%s'" % chunk[-1]["sortKey"]
-            elif more_changes:
-                # continue moreChanges based (since Gerrit 2.9)
-                resume = "-S %d" % len(alldata)
-            else:
-                # no more changes
-                break
-
-            chunk, more_changes = _query_chunk("%s %s" % (query, resume))
-        return alldata
-
-    def _open(self):
-        client = paramiko.SSHClient()
-        client.load_system_host_keys()
-        client.set_missing_host_key_policy(paramiko.WarningPolicy())
-        client.connect(self.server,
-                       username=self.user,
-                       port=self.port,
-                       key_filename=self.keyfile)
-        transport = client.get_transport()
-        transport.set_keepalive(self.keepalive)
-        self.client = client
-
-    def _ssh(self, command, stdin_data=None):
-        if not self.client:
-            self._open()
-
-        try:
-            self.log.debug("SSH command:\n%s" % command)
-            stdin, stdout, stderr = self.client.exec_command(command)
-        except:
-            self._open()
-            stdin, stdout, stderr = self.client.exec_command(command)
-
-        if stdin_data:
-            stdin.write(stdin_data)
-
-        out = stdout.read()
-        self.log.debug("SSH received stdout:\n%s" % out)
-
-        ret = stdout.channel.recv_exit_status()
-        self.log.debug("SSH exit status: %s" % ret)
-
-        err = stderr.read()
-        self.log.debug("SSH received stderr:\n%s" % err)
-        if ret:
-            raise Exception("Gerrit error executing %s" % command)
-        return (out, err)
-
-    def getInfoRefs(self, project):
-        url = "%s/p/%s/info/refs?service=git-upload-pack" % (
-            self.baseurl, project)
-        try:
-            data = urllib.request.urlopen(url).read()
-        except:
-            self.log.error("Cannot get references from %s" % url)
-            raise  # keeps urllib error informations
-        ret = {}
-        read_headers = False
-        read_advertisement = False
-        if data[4] != '#':
-            raise Exception("Gerrit repository does not support "
-                            "git-upload-pack")
-        i = 0
-        while i < len(data):
-            if len(data) - i < 4:
-                raise Exception("Invalid length in info/refs")
-            plen = int(data[i:i + 4], 16)
-            i += 4
-            # It's the length of the packet, including the 4 bytes of the
-            # length itself, unless it's null, in which case the length is
-            # not included.
-            if plen > 0:
-                plen -= 4
-            if len(data) - i < plen:
-                raise Exception("Invalid data in info/refs")
-            line = data[i:i + plen]
-            i += plen
-            if not read_headers:
-                if plen == 0:
-                    read_headers = True
-                continue
-            if not read_advertisement:
-                read_advertisement = True
-                continue
-            if plen == 0:
-                # The terminating null
-                continue
-            line = line.strip()
-            revision, ref = line.split()
-            ret[ref] = revision
-        return ret
-
-    def getGitUrl(self, project):
-        url = 'ssh://%s@%s:%s/%s' % (self.user, self.server, self.port,
-                                     project.name)
-        return url
-
-    def getGitwebUrl(self, project, sha=None):
-        url = '%s/gitweb?p=%s.git' % (self.baseurl, project)
-        if sha:
-            url += ';a=commitdiff;h=' + sha
-        return url
-
-    def onLoad(self):
-        self.log.debug("Starting Gerrit Conncetion/Watchers")
-        self._start_watcher_thread()
-        self._start_event_connector()
-
-    def onStop(self):
-        self.log.debug("Stopping Gerrit Conncetion/Watchers")
-        self._stop_watcher_thread()
-        self._stop_event_connector()
-
-    def _stop_watcher_thread(self):
-        if self.watcher_thread:
-            self.watcher_thread.stop()
-            self.watcher_thread.join()
-
-    def _start_watcher_thread(self):
-        self.event_queue = Queue.Queue()
-        self.watcher_thread = GerritWatcher(
-            self,
-            self.user,
-            self.server,
-            self.port,
-            keyfile=self.keyfile,
-            keepalive=self.keepalive)
-        self.watcher_thread.start()
-
-    def _stop_event_connector(self):
-        if self.gerrit_event_connector:
-            self.gerrit_event_connector.stop()
-            self.gerrit_event_connector.join()
-
-    def _start_event_connector(self):
-        self.gerrit_event_connector = GerritEventConnector(self)
-        self.gerrit_event_connector.start()
-
-
-def getSchema():
-    gerrit_connection = v.Any(str, v.Schema({}, extra=True))
-    return gerrit_connection
diff --git a/zuul/driver/__init__.py b/zuul/driver/__init__.py
new file mode 100644
index 0000000..36e83bd
--- /dev/null
+++ b/zuul/driver/__init__.py
@@ -0,0 +1,218 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import abc
+
+import six
+
+
+@six.add_metaclass(abc.ABCMeta)
+class Driver(object):
+    """A Driver is an extension component of Zuul that supports
+    interfacing with a remote system.  It can support any of the following
+    interfaces (but must support at least one to be useful):
+
+    * ConnectionInterface
+    * SourceInterface
+    * TriggerInterface
+    * ReporterInterface
+
+    Zuul will create a single instance of each Driver (which will be
+    shared by all tenants), and this instance will persist for the life of
+    the process.  The Driver class may therefore manage any global state
+    used by all connections.
+
+    The class or instance attribute **name** must be provided as a string.
+
+    """
+    name = None
+
+    def reconfigure(self, tenant):
+        """Called when a tenant is reconfigured.
+
+        This method is optional; the base implementation does nothing.
+
+        When Zuul performs a reconfiguration for a tenant, this method
+        is called with the tenant (including the new layout
+        configuration) as an argument.  The driver may establish any
+        global resources needed by the tenant at this point.
+
+        :arg Tenant tenant: The :py:class:`zuul.model.Tenant` which has been
+            reconfigured.
+
+        """
+        pass
+
+    def registerScheduler(self, scheduler):
+        """Register the scheduler with the driver.
+
+        This method is optional; the base implementation does nothing.
+
+        This method is called once during initialization to allow the
+        driver to store a handle to the running scheduler.
+
+        :arg Scheduler scheduler: The current running
+           :py:class:`zuul.scheduler.Scheduler`.
+
+        """
+        pass
+
+
+@six.add_metaclass(abc.ABCMeta)
+class ConnectionInterface(object):
+    """The Connection interface.
+
+    A driver which is able to supply a Connection should implement
+    this interface.
+
+    """
+
+    @abc.abstractmethod
+    def getConnection(self, name, config):
+        """Create and return a new Connection object.
+
+        This method is required by the interface.
+
+        This method will be called once for each connection specified
+        in zuul.conf.  The resultant object should be responsible for
+        establishing any long-lived connections to remote systems.  If
+        Zuul is reconfigured, all existing connections will be stopped
+        and this method will be called again for any new connections
+        which should be created.
+
+        When a connection is specified in zuul.conf with a name, that
+        name is used here when creating the connection, and it is also
+        used in the layout to attach triggers and reporters to the
+        named connection.  If the Driver does not utilize a connection
+        (because it does not interact with a remote system), do not
+        implement this method and Zuul will automatically associate
+        triggers and reporters with the name of the Driver itself
+        where it would normally expect the name of a connection.
+
+        :arg str name: The name of the connection.  This is the name
+            supplied in the zuul.conf file where the connection is
+            configured.
+        :arg dict config: The configuration information supplied along
+            with the connection in zuul.conf.
+
+        :returns: A new Connection object.
+        :rtype: Connection
+
+        """
+        pass
+
+
+@six.add_metaclass(abc.ABCMeta)
+class TriggerInterface(object):
+    """The trigger interface.
+
+    A driver which is able to supply a Trigger should implement this
+    interface.
+
+    """
+
+    @abc.abstractmethod
+    def getTrigger(self, connection, config=None):
+        """Create and return a new Trigger object.
+
+        This method is required by the interface.
+
+        :arg Connection connection: The Connection object associated
+            with the trigger (as previously returned by getConnection)
+            or None.
+        :arg dict config: The configuration information supplied along
+            with the trigger in the layout.
+
+        :returns: A new Trigger object.
+        :rtype: Trigger
+
+        """
+        pass
+
+    @abc.abstractmethod
+    def getTriggerSchema(self):
+        """Get the schema for this driver's trigger.
+
+        This method is required by the interface.
+
+        :returns: A voluptuous schema.
+        :rtype: dict or Schema
+
+        """
+        pass
+
+
+@six.add_metaclass(abc.ABCMeta)
+class SourceInterface(object):
+    """The source interface to be implemented by a driver.
+
+    A driver which is able to supply a Source should implement this
+    interface.
+
+    """
+
+    @abc.abstractmethod
+    def getSource(self, connection):
+        """Create and return a new Source object.
+
+        This method is required by the interface.
+
+        :arg Connection connection: The Connection object associated
+            with the source (as previously returned by getConnection).
+
+        :returns: A new Source object.
+        :rtype: Source
+
+        """
+        pass
+
+
+@six.add_metaclass(abc.ABCMeta)
+class ReporterInterface(object):
+    """The reporter interface to be implemented by a driver.
+
+    A driver which is able to supply a Reporter should implement this
+    interface.
+
+    """
+
+    @abc.abstractmethod
+    def getReporter(self, connection, config=None):
+        """Create and return a new Reporter object.
+
+        This method is required by the interface.
+
+        :arg Connection connection: The Connection object associated
+            with the reporter (as previously returned by getConnection)
+            or None.
+        :arg dict config: The configuration information supplied along
+            with the reporter in the layout.
+
+        :returns: A new Reporter object.
+        :rtype: Reporter
+
+        """
+        pass
+
+    @abc.abstractmethod
+    def getReporterSchema(self):
+        """Get the schema for this driver's reporter.
+
+        This method is required by the interface.
+
+        :returns: A voluptuous schema.
+        :rtype: dict or Schema
+
+        """
+        pass
diff --git a/zuul/driver/gerrit/__init__.py b/zuul/driver/gerrit/__init__.py
new file mode 100644
index 0000000..3bc371e
--- /dev/null
+++ b/zuul/driver/gerrit/__init__.py
@@ -0,0 +1,43 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from zuul.driver import Driver, ConnectionInterface, TriggerInterface
+from zuul.driver import SourceInterface, ReporterInterface
+import gerritconnection
+import gerrittrigger
+import gerritsource
+import gerritreporter
+
+
+class GerritDriver(Driver, ConnectionInterface, TriggerInterface,
+                   SourceInterface, ReporterInterface):
+    name = 'gerrit'
+
+    def getConnection(self, name, config):
+        return gerritconnection.GerritConnection(self, name, config)
+
+    def getTrigger(self, connection, config=None):
+        return gerrittrigger.GerritTrigger(self, connection, config)
+
+    def getSource(self, connection):
+        return gerritsource.GerritSource(self, connection)
+
+    def getReporter(self, connection, config=None):
+        return gerritreporter.GerritReporter(self, connection, config)
+
+    def getTriggerSchema(self):
+        return gerrittrigger.getSchema()
+
+    def getReporterSchema(self):
+        return gerritreporter.getSchema()
diff --git a/zuul/driver/gerrit/gerritconnection.py b/zuul/driver/gerrit/gerritconnection.py
new file mode 100644
index 0000000..286006f
--- /dev/null
+++ b/zuul/driver/gerrit/gerritconnection.py
@@ -0,0 +1,811 @@
+# Copyright 2011 OpenStack, LLC.
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import json
+import re
+import select
+import threading
+import time
+from six.moves import queue as Queue
+from six.moves import urllib
+import paramiko
+import logging
+import pprint
+import voluptuous as v
+
+from zuul.connection import BaseConnection
+from zuul.model import TriggerEvent, Project, Change, Ref, NullChange
+from zuul import exceptions
+
+
+# Walk the change dependency tree to find a cycle
+def detect_cycle(change, history=None):
+    if history is None:
+        history = []
+    else:
+        history = history[:]
+    history.append(change.number)
+    for dep in change.needs_changes:
+        if dep.number in history:
+            raise Exception("Dependency cycle detected: %s in %s" % (
+                dep.number, history))
+        detect_cycle(dep, history)
+
+
+class GerritEventConnector(threading.Thread):
+    """Move events from Gerrit to the scheduler."""
+
+    log = logging.getLogger("zuul.GerritEventConnector")
+    delay = 10.0
+
+    def __init__(self, connection):
+        super(GerritEventConnector, self).__init__()
+        self.daemon = True
+        self.connection = connection
+        self._stopped = False
+
+    def stop(self):
+        self._stopped = True
+        self.connection.addEvent(None)
+
+    def _handleEvent(self):
+        ts, data = self.connection.getEvent()
+        if self._stopped:
+            return
+        # Gerrit can produce inconsistent data immediately after an
+        # event, So ensure that we do not deliver the event to Zuul
+        # until at least a certain amount of time has passed.  Note
+        # that if we receive several events in succession, we will
+        # only need to delay for the first event.  In essence, Zuul
+        # should always be a constant number of seconds behind Gerrit.
+        now = time.time()
+        time.sleep(max((ts + self.delay) - now, 0.0))
+        event = TriggerEvent()
+        event.type = data.get('type')
+        event.trigger_name = 'gerrit'
+        change = data.get('change')
+        if change:
+            event.project_name = change.get('project')
+            event.branch = change.get('branch')
+            event.change_number = str(change.get('number'))
+            event.change_url = change.get('url')
+            patchset = data.get('patchSet')
+            if patchset:
+                event.patch_number = patchset.get('number')
+                event.refspec = patchset.get('ref')
+            event.approvals = data.get('approvals', [])
+            event.comment = data.get('comment')
+        refupdate = data.get('refUpdate')
+        if refupdate:
+            event.project_name = refupdate.get('project')
+            event.ref = refupdate.get('refName')
+            event.oldrev = refupdate.get('oldRev')
+            event.newrev = refupdate.get('newRev')
+        # Map the event types to a field name holding a Gerrit
+        # account attribute. See Gerrit stream-event documentation
+        # in cmd-stream-events.html
+        accountfield_from_type = {
+            'patchset-created': 'uploader',
+            'draft-published': 'uploader',  # Gerrit 2.5/2.6
+            'change-abandoned': 'abandoner',
+            'change-restored': 'restorer',
+            'change-merged': 'submitter',
+            'merge-failed': 'submitter',  # Gerrit 2.5/2.6
+            'comment-added': 'author',
+            'ref-updated': 'submitter',
+            'reviewer-added': 'reviewer',  # Gerrit 2.5/2.6
+            'ref-replicated': None,
+            'ref-replication-done': None,
+            'topic-changed': 'changer',
+        }
+        event.account = None
+        if event.type in accountfield_from_type:
+            field = accountfield_from_type[event.type]
+            if field:
+                event.account = data.get(accountfield_from_type[event.type])
+        else:
+            self.log.warning("Received unrecognized event type '%s' "
+                             "from Gerrit. Can not get account information." %
+                             (event.type,))
+
+        if event.change_number:
+            # TODO(jhesketh): Check if the project exists?
+            # and self.connection.sched.getProject(event.project_name):
+
+            # Call _getChange for the side effect of updating the
+            # cache.  Note that this modifies Change objects outside
+            # the main thread.
+            # NOTE(jhesketh): Ideally we'd just remove the change from the
+            # cache to denote that it needs updating. However the change
+            # object is already used by Items and hence BuildSets etc. and
+            # we need to update those objects by reference so that they have
+            # the correct/new information and also avoid hitting gerrit
+            # multiple times.
+            self.connection._getChange(event.change_number,
+                                       event.patch_number,
+                                       refresh=True)
+        self.connection.sched.addEvent(event)
+
+    def run(self):
+        while True:
+            if self._stopped:
+                return
+            try:
+                self._handleEvent()
+            except:
+                self.log.exception("Exception moving Gerrit event:")
+            finally:
+                self.connection.eventDone()
+
+
+class GerritWatcher(threading.Thread):
+    log = logging.getLogger("gerrit.GerritWatcher")
+    poll_timeout = 500
+
+    def __init__(self, gerrit_connection, username, hostname, port=29418,
+                 keyfile=None, keepalive=60):
+        threading.Thread.__init__(self)
+        self.username = username
+        self.keyfile = keyfile
+        self.hostname = hostname
+        self.port = port
+        self.gerrit_connection = gerrit_connection
+        self.keepalive = keepalive
+        self._stopped = False
+
+    def _read(self, fd):
+        l = fd.readline()
+        data = json.loads(l)
+        self.log.debug("Received data from Gerrit event stream: \n%s" %
+                       pprint.pformat(data))
+        self.gerrit_connection.addEvent(data)
+
+    def _listen(self, stdout, stderr):
+        poll = select.poll()
+        poll.register(stdout.channel)
+        while not self._stopped:
+            ret = poll.poll(self.poll_timeout)
+            for (fd, event) in ret:
+                if fd == stdout.channel.fileno():
+                    if event == select.POLLIN:
+                        self._read(stdout)
+                    else:
+                        raise Exception("event on ssh connection")
+
+    def _run(self):
+        try:
+            client = paramiko.SSHClient()
+            client.load_system_host_keys()
+            client.set_missing_host_key_policy(paramiko.WarningPolicy())
+            client.connect(self.hostname,
+                           username=self.username,
+                           port=self.port,
+                           key_filename=self.keyfile)
+            transport = client.get_transport()
+            transport.set_keepalive(self.keepalive)
+
+            stdin, stdout, stderr = client.exec_command("gerrit stream-events")
+
+            self._listen(stdout, stderr)
+
+            if not stdout.channel.exit_status_ready():
+                # The stream-event is still running but we are done polling
+                # on stdout most likely due to being asked to stop.
+                # Try to stop the stream-events command sending Ctrl-C
+                stdin.write("\x03")
+                time.sleep(.2)
+                if not stdout.channel.exit_status_ready():
+                    # we're still not ready to exit, lets force the channel
+                    # closed now.
+                    stdout.channel.close()
+            ret = stdout.channel.recv_exit_status()
+            self.log.debug("SSH exit status: %s" % ret)
+            client.close()
+
+            if ret and ret not in [-1, 130]:
+                raise Exception("Gerrit error executing stream-events")
+        except:
+            self.log.exception("Exception on ssh event stream:")
+            time.sleep(5)
+
+    def run(self):
+        while not self._stopped:
+            self._run()
+
+    def stop(self):
+        self.log.debug("Stopping watcher")
+        self._stopped = True
+
+
+class GerritConnection(BaseConnection):
+    driver_name = 'gerrit'
+    log = logging.getLogger("zuul.GerritConnection")
+    depends_on_re = re.compile(r"^Depends-On: (I[0-9a-f]{40})\s*$",
+                               re.MULTILINE | re.IGNORECASE)
+    replication_timeout = 300
+    replication_retry_interval = 5
+
+    def __init__(self, driver, connection_name, connection_config):
+        super(GerritConnection, self).__init__(driver, connection_name,
+                                               connection_config)
+        if 'server' not in self.connection_config:
+            raise Exception('server is required for gerrit connections in '
+                            '%s' % self.connection_name)
+        if 'user' not in self.connection_config:
+            raise Exception('user is required for gerrit connections in '
+                            '%s' % self.connection_name)
+
+        self.user = self.connection_config.get('user')
+        self.server = self.connection_config.get('server')
+        self.port = int(self.connection_config.get('port', 29418))
+        self.keyfile = self.connection_config.get('sshkey', None)
+        self.keepalive = int(self.connection_config.get('keepalive', 60))
+        self.watcher_thread = None
+        self.event_queue = Queue.Queue()
+        self.client = None
+
+        self.baseurl = self.connection_config.get('baseurl',
+                                                  'https://%s' % self.server)
+
+        self._change_cache = {}
+        self.projects = {}
+        self.gerrit_event_connector = None
+
+    def getProject(self, name):
+        if name not in self.projects:
+            self.projects[name] = Project(name, self.connection_name)
+        return self.projects[name]
+
+    def maintainCache(self, relevant):
+        # This lets the user supply a list of change objects that are
+        # still in use.  Anything in our cache that isn't in the supplied
+        # list should be safe to remove from the cache.
+        remove = []
+        for key, change in self._change_cache.items():
+            if change not in relevant:
+                remove.append(key)
+        for key in remove:
+            del self._change_cache[key]
+
+    def getChange(self, event, refresh=False):
+        if event.change_number:
+            change = self._getChange(event.change_number, event.patch_number,
+                                     refresh=refresh)
+        elif event.ref:
+            project = self.getProject(event.project_name)
+            change = Ref(project)
+            change.ref = event.ref
+            change.oldrev = event.oldrev
+            change.newrev = event.newrev
+            change.url = self._getGitwebUrl(project, sha=event.newrev)
+        else:
+            project = self.getProject(event.project_name)
+            change = NullChange(project)
+        return change
+
+    def _getChange(self, number, patchset, refresh=False, history=None):
+        key = '%s,%s' % (number, patchset)
+        change = self._change_cache.get(key)
+        if change and not refresh:
+            return change
+        if not change:
+            change = Change(None)
+            change.number = number
+            change.patchset = patchset
+        key = '%s,%s' % (change.number, change.patchset)
+        self._change_cache[key] = change
+        try:
+            self._updateChange(change, history)
+        except Exception:
+            if key in self._change_cache:
+                del self._change_cache[key]
+            raise
+        return change
+
+    def _getDependsOnFromCommit(self, message, change):
+        records = []
+        seen = set()
+        for match in self.depends_on_re.findall(message):
+            if match in seen:
+                self.log.debug("Ignoring duplicate Depends-On: %s" %
+                               (match,))
+                continue
+            seen.add(match)
+            query = "change:%s" % (match,)
+            self.log.debug("Updating %s: Running query %s "
+                           "to find needed changes" %
+                           (change, query,))
+            records.extend(self.simpleQuery(query))
+        return records
+
+    def _getNeededByFromCommit(self, change_id, change):
+        records = []
+        seen = set()
+        query = 'message:%s' % change_id
+        self.log.debug("Updating %s: Running query %s "
+                       "to find changes needed-by" %
+                       (change, query,))
+        results = self.simpleQuery(query)
+        for result in results:
+            for match in self.depends_on_re.findall(
+                result['commitMessage']):
+                if match != change_id:
+                    continue
+                key = (result['number'], result['currentPatchSet']['number'])
+                if key in seen:
+                    continue
+                self.log.debug("Updating %s: Found change %s,%s "
+                               "needs %s from commit" %
+                               (change, key[0], key[1], change_id))
+                seen.add(key)
+                records.append(result)
+        return records
+
+    def _updateChange(self, change, history=None):
+        self.log.info("Updating %s" % (change,))
+        data = self.query(change.number)
+        change._data = data
+
+        if change.patchset is None:
+            change.patchset = data['currentPatchSet']['number']
+
+        if 'project' not in data:
+            raise exceptions.ChangeNotFound(change.number, change.patchset)
+        change.project = self.getProject(data['project'])
+        change.branch = data['branch']
+        change.url = data['url']
+        max_ps = 0
+        files = []
+        for ps in data['patchSets']:
+            if ps['number'] == change.patchset:
+                change.refspec = ps['ref']
+                for f in ps.get('files', []):
+                    files.append(f['file'])
+            if int(ps['number']) > int(max_ps):
+                max_ps = ps['number']
+        if max_ps == change.patchset:
+            change.is_current_patchset = True
+        else:
+            change.is_current_patchset = False
+        change.files = files
+
+        change.is_merged = self._isMerged(change)
+        change.approvals = data['currentPatchSet'].get('approvals', [])
+        change.open = data['open']
+        change.status = data['status']
+        change.owner = data['owner']
+
+        if change.is_merged:
+            # This change is merged, so we don't need to look any further
+            # for dependencies.
+            self.log.debug("Updating %s: change is merged" % (change,))
+            return change
+
+        if history is None:
+            history = []
+        else:
+            history = history[:]
+        history.append(change.number)
+
+        needs_changes = []
+        if 'dependsOn' in data:
+            parts = data['dependsOn'][0]['ref'].split('/')
+            dep_num, dep_ps = parts[3], parts[4]
+            if dep_num in history:
+                raise Exception("Dependency cycle detected: %s in %s" % (
+                    dep_num, history))
+            self.log.debug("Updating %s: Getting git-dependent change %s,%s" %
+                           (change, dep_num, dep_ps))
+            dep = self._getChange(dep_num, dep_ps, history=history)
+            # Because we are not forcing a refresh in _getChange, it
+            # may return without executing this code, so if we are
+            # updating our change to add ourselves to a dependency
+            # cycle, we won't detect it.  By explicitly performing a
+            # walk of the dependency tree, we will.
+            detect_cycle(dep, history)
+            if (not dep.is_merged) and dep not in needs_changes:
+                needs_changes.append(dep)
+
+        for record in self._getDependsOnFromCommit(data['commitMessage'],
+                                                   change):
+            dep_num = record['number']
+            dep_ps = record['currentPatchSet']['number']
+            if dep_num in history:
+                raise Exception("Dependency cycle detected: %s in %s" % (
+                    dep_num, history))
+            self.log.debug("Updating %s: Getting commit-dependent "
+                           "change %s,%s" %
+                           (change, dep_num, dep_ps))
+            dep = self._getChange(dep_num, dep_ps, history=history)
+            # Because we are not forcing a refresh in _getChange, it
+            # may return without executing this code, so if we are
+            # updating our change to add ourselves to a dependency
+            # cycle, we won't detect it.  By explicitly performing a
+            # walk of the dependency tree, we will.
+            detect_cycle(dep, history)
+            if (not dep.is_merged) and dep not in needs_changes:
+                needs_changes.append(dep)
+        change.needs_changes = needs_changes
+
+        needed_by_changes = []
+        if 'neededBy' in data:
+            for needed in data['neededBy']:
+                parts = needed['ref'].split('/')
+                dep_num, dep_ps = parts[3], parts[4]
+                self.log.debug("Updating %s: Getting git-needed change %s,%s" %
+                               (change, dep_num, dep_ps))
+                dep = self._getChange(dep_num, dep_ps)
+                if (not dep.is_merged) and dep.is_current_patchset:
+                    needed_by_changes.append(dep)
+
+        for record in self._getNeededByFromCommit(data['id'], change):
+            dep_num = record['number']
+            dep_ps = record['currentPatchSet']['number']
+            self.log.debug("Updating %s: Getting commit-needed change %s,%s" %
+                           (change, dep_num, dep_ps))
+            # Because a commit needed-by may be a cross-repo
+            # dependency, cause that change to refresh so that it will
+            # reference the latest patchset of its Depends-On (this
+            # change).
+            dep = self._getChange(dep_num, dep_ps, refresh=True)
+            if (not dep.is_merged) and dep.is_current_patchset:
+                needed_by_changes.append(dep)
+        change.needed_by_changes = needed_by_changes
+
+        return change
+
+    def isMerged(self, change, head=None):
+        self.log.debug("Checking if change %s is merged" % change)
+        if not change.number:
+            self.log.debug("Change has no number; considering it merged")
+            # Good question.  It's probably ref-updated, which, ah,
+            # means it's merged.
+            return True
+
+        data = self.query(change.number)
+        change._data = data
+        change.is_merged = self._isMerged(change)
+        if change.is_merged:
+            self.log.debug("Change %s is merged" % (change,))
+        else:
+            self.log.debug("Change %s is not merged" % (change,))
+        if not head:
+            return change.is_merged
+        if not change.is_merged:
+            return False
+
+        ref = 'refs/heads/' + change.branch
+        self.log.debug("Waiting for %s to appear in git repo" % (change))
+        if self._waitForRefSha(change.project, ref, change._ref_sha):
+            self.log.debug("Change %s is in the git repo" %
+                           (change))
+            return True
+        self.log.debug("Change %s did not appear in the git repo" %
+                       (change))
+        return False
+
+    def _isMerged(self, change):
+        data = change._data
+        if not data:
+            return False
+        status = data.get('status')
+        if not status:
+            return False
+        if status == 'MERGED':
+            return True
+        return False
+
+    def _waitForRefSha(self, project, ref, old_sha=''):
+        # Wait for the ref to show up in the repo
+        start = time.time()
+        while time.time() - start < self.replication_timeout:
+            sha = self.getRefSha(project.name, ref)
+            if old_sha != sha:
+                return True
+            time.sleep(self.replication_retry_interval)
+        return False
+
+    def getRefSha(self, project_name, ref):
+        refs = {}
+        try:
+            refs = self.getInfoRefs(project_name)
+        except:
+            self.log.exception("Exception looking for ref %s" %
+                               ref)
+        sha = refs.get(ref, '')
+        return sha
+
+    def canMerge(self, change, allow_needs):
+        if not change.number:
+            self.log.debug("Change has no number; considering it merged")
+            # Good question.  It's probably ref-updated, which, ah,
+            # means it's merged.
+            return True
+        data = change._data
+        if not data:
+            return False
+        if 'submitRecords' not in data:
+            return False
+        try:
+            for sr in data['submitRecords']:
+                if sr['status'] == 'OK':
+                    return True
+                elif sr['status'] == 'NOT_READY':
+                    for label in sr['labels']:
+                        if label['status'] in ['OK', 'MAY']:
+                            continue
+                        elif label['status'] in ['NEED', 'REJECT']:
+                            # It may be our own rejection, so we ignore
+                            if label['label'].lower() not in allow_needs:
+                                return False
+                            continue
+                        else:
+                            # IMPOSSIBLE
+                            return False
+                else:
+                    # CLOSED, RULE_ERROR
+                    return False
+        except:
+            self.log.exception("Exception determining whether change"
+                               "%s can merge:" % change)
+            return False
+        return True
+
+    def getProjectOpenChanges(self, project):
+        # This is a best-effort function in case Gerrit is unable to return
+        # a particular change.  It happens.
+        query = "project:%s status:open" % (project.name,)
+        self.log.debug("Running query %s to get project open changes" %
+                       (query,))
+        data = self.simpleQuery(query)
+        changes = []
+        for record in data:
+            try:
+                changes.append(
+                    self._getChange(record['number'],
+                                    record['currentPatchSet']['number']))
+            except Exception:
+                self.log.exception("Unable to query change %s" %
+                                   (record.get('number'),))
+        return changes
+
+    def getProjectBranches(self, project):
+        refs = self.getInfoRefs(project.name)
+        heads = [str(k[len('refs/heads/'):]) for k in refs.keys()
+                 if k.startswith('refs/heads/')]
+        return heads
+
+    def addEvent(self, data):
+        return self.event_queue.put((time.time(), data))
+
+    def getEvent(self):
+        return self.event_queue.get()
+
+    def eventDone(self):
+        self.event_queue.task_done()
+
+    def review(self, project, change, message, action={}):
+        cmd = 'gerrit review --project %s' % project
+        if message:
+            cmd += ' --message "%s"' % message
+        for key, val in action.items():
+            if val is True:
+                cmd += ' --%s' % key
+            else:
+                cmd += ' --%s %s' % (key, val)
+        cmd += ' %s' % change
+        out, err = self._ssh(cmd)
+        return err
+
+    def query(self, query):
+        args = '--all-approvals --comments --commit-message'
+        args += ' --current-patch-set --dependencies --files'
+        args += ' --patch-sets --submit-records'
+        cmd = 'gerrit query --format json %s %s' % (
+            args, query)
+        out, err = self._ssh(cmd)
+        if not out:
+            return False
+        lines = out.split('\n')
+        if not lines:
+            return False
+        data = json.loads(lines[0])
+        if not data:
+            return False
+        self.log.debug("Received data from Gerrit query: \n%s" %
+                       (pprint.pformat(data)))
+        return data
+
+    def simpleQuery(self, query):
+        def _query_chunk(query):
+            args = '--commit-message --current-patch-set'
+
+            cmd = 'gerrit query --format json %s %s' % (
+                args, query)
+            out, err = self._ssh(cmd)
+            if not out:
+                return False
+            lines = out.split('\n')
+            if not lines:
+                return False
+
+            # filter out blank lines
+            data = [json.loads(line) for line in lines
+                    if line.startswith('{')]
+
+            # check last entry for more changes
+            more_changes = None
+            if 'moreChanges' in data[-1]:
+                more_changes = data[-1]['moreChanges']
+
+            # we have to remove the statistics line
+            del data[-1]
+
+            if not data:
+                return False, more_changes
+            self.log.debug("Received data from Gerrit query: \n%s" %
+                           (pprint.pformat(data)))
+            return data, more_changes
+
+        # gerrit returns 500 results by default, so implement paging
+        # for large projects like nova
+        alldata = []
+        chunk, more_changes = _query_chunk(query)
+        while(chunk):
+            alldata.extend(chunk)
+            if more_changes is None:
+                # continue sortKey based (before Gerrit 2.9)
+                resume = "resume_sortkey:'%s'" % chunk[-1]["sortKey"]
+            elif more_changes:
+                # continue moreChanges based (since Gerrit 2.9)
+                resume = "-S %d" % len(alldata)
+            else:
+                # no more changes
+                break
+
+            chunk, more_changes = _query_chunk("%s %s" % (query, resume))
+        return alldata
+
+    def _open(self):
+        client = paramiko.SSHClient()
+        client.load_system_host_keys()
+        client.set_missing_host_key_policy(paramiko.WarningPolicy())
+        client.connect(self.server,
+                       username=self.user,
+                       port=self.port,
+                       key_filename=self.keyfile)
+        transport = client.get_transport()
+        transport.set_keepalive(self.keepalive)
+        self.client = client
+
+    def _ssh(self, command, stdin_data=None):
+        if not self.client:
+            self._open()
+
+        try:
+            self.log.debug("SSH command:\n%s" % command)
+            stdin, stdout, stderr = self.client.exec_command(command)
+        except:
+            self._open()
+            stdin, stdout, stderr = self.client.exec_command(command)
+
+        if stdin_data:
+            stdin.write(stdin_data)
+
+        out = stdout.read()
+        self.log.debug("SSH received stdout:\n%s" % out)
+
+        ret = stdout.channel.recv_exit_status()
+        self.log.debug("SSH exit status: %s" % ret)
+
+        err = stderr.read()
+        self.log.debug("SSH received stderr:\n%s" % err)
+        if ret:
+            raise Exception("Gerrit error executing %s" % command)
+        return (out, err)
+
+    def getInfoRefs(self, project_name):
+        url = "%s/p/%s/info/refs?service=git-upload-pack" % (
+            self.baseurl, project_name)
+        try:
+            data = urllib.request.urlopen(url).read()
+        except:
+            self.log.error("Cannot get references from %s" % url)
+            raise  # keeps urllib error informations
+        ret = {}
+        read_headers = False
+        read_advertisement = False
+        if data[4] != '#':
+            raise Exception("Gerrit repository does not support "
+                            "git-upload-pack")
+        i = 0
+        while i < len(data):
+            if len(data) - i < 4:
+                raise Exception("Invalid length in info/refs")
+            plen = int(data[i:i + 4], 16)
+            i += 4
+            # It's the length of the packet, including the 4 bytes of the
+            # length itself, unless it's null, in which case the length is
+            # not included.
+            if plen > 0:
+                plen -= 4
+            if len(data) - i < plen:
+                raise Exception("Invalid data in info/refs")
+            line = data[i:i + plen]
+            i += plen
+            if not read_headers:
+                if plen == 0:
+                    read_headers = True
+                continue
+            if not read_advertisement:
+                read_advertisement = True
+                continue
+            if plen == 0:
+                # The terminating null
+                continue
+            line = line.strip()
+            revision, ref = line.split()
+            ret[ref] = revision
+        return ret
+
+    def getGitUrl(self, project):
+        url = 'ssh://%s@%s:%s/%s' % (self.user, self.server, self.port,
+                                     project.name)
+        return url
+
+    def _getGitwebUrl(self, project, sha=None):
+        url = '%s/gitweb?p=%s.git' % (self.baseurl, project)
+        if sha:
+            url += ';a=commitdiff;h=' + sha
+        return url
+
+    def onLoad(self):
+        self.log.debug("Starting Gerrit Connection/Watchers")
+        self._start_watcher_thread()
+        self._start_event_connector()
+
+    def onStop(self):
+        self.log.debug("Stopping Gerrit Connection/Watchers")
+        self._stop_watcher_thread()
+        self._stop_event_connector()
+
+    def _stop_watcher_thread(self):
+        if self.watcher_thread:
+            self.watcher_thread.stop()
+            self.watcher_thread.join()
+
+    def _start_watcher_thread(self):
+        self.watcher_thread = GerritWatcher(
+            self,
+            self.user,
+            self.server,
+            self.port,
+            keyfile=self.keyfile,
+            keepalive=self.keepalive)
+        self.watcher_thread.start()
+
+    def _stop_event_connector(self):
+        if self.gerrit_event_connector:
+            self.gerrit_event_connector.stop()
+            self.gerrit_event_connector.join()
+
+    def _start_event_connector(self):
+        self.gerrit_event_connector = GerritEventConnector(self)
+        self.gerrit_event_connector.start()
+
+
+def getSchema():
+    gerrit_connection = v.Any(str, v.Schema({}, extra=True))
+    return gerrit_connection
diff --git a/zuul/reporter/gerrit.py b/zuul/driver/gerrit/gerritreporter.py
similarity index 90%
rename from zuul/reporter/gerrit.py
rename to zuul/driver/gerrit/gerritreporter.py
index d9c671d..d132d65 100644
--- a/zuul/reporter/gerrit.py
+++ b/zuul/driver/gerrit/gerritreporter.py
@@ -30,13 +30,13 @@
         message = self._formatItemReport(pipeline, item)
 
         self.log.debug("Report change %s, params %s, message: %s" %
-                       (item.change, self.reporter_config, message))
+                       (item.change, self.config, message))
         changeid = '%s,%s' % (item.change.number, item.change.patchset)
         item.change._ref_sha = source.getRefSha(
             item.change.project.name, 'refs/heads/' + item.change.branch)
 
         return self.connection.review(item.change.project.name, changeid,
-                                      message, self.reporter_config)
+                                      message, self.config)
 
     def getSubmitAllowNeeds(self):
         """Get a list of code review labels that are allowed to be
@@ -44,7 +44,7 @@
         to this queue.  In other words, the list of review labels
         this reporter itself is likely to set before submitting.
         """
-        return self.reporter_config
+        return self.config
 
 
 def getSchema():
diff --git a/zuul/driver/gerrit/gerritsource.py b/zuul/driver/gerrit/gerritsource.py
new file mode 100644
index 0000000..c5e46b1
--- /dev/null
+++ b/zuul/driver/gerrit/gerritsource.py
@@ -0,0 +1,51 @@
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+from zuul.source import BaseSource
+
+
+class GerritSource(BaseSource):
+    name = 'gerrit'
+    log = logging.getLogger("zuul.source.Gerrit")
+
+    def getRefSha(self, project, ref):
+        return self.connection.getRefSha(project, ref)
+
+    def isMerged(self, change, head=None):
+        return self.connection.isMerged(change, head)
+
+    def canMerge(self, change, allow_needs):
+        return self.connection.canMerge(change, allow_needs)
+
+    def postConfig(self):
+        pass
+
+    def getChange(self, event, refresh=False):
+        return self.connection.getChange(event, refresh)
+
+    def getProject(self, name):
+        return self.connection.getProject(name)
+
+    def getProjectOpenChanges(self, project):
+        return self.connection.getProjectOpenChanges(project)
+
+    def getProjectBranches(self, project):
+        return self.connection.getProjectBranches(project)
+
+    def getGitUrl(self, project):
+        return self.connection.getGitUrl(project)
+
+    def _getGitwebUrl(self, project, sha=None):
+        return self.connection._getGitwebUrl(project, sha)
diff --git a/zuul/trigger/gerrit.py b/zuul/driver/gerrit/gerrittrigger.py
similarity index 100%
rename from zuul/trigger/gerrit.py
rename to zuul/driver/gerrit/gerrittrigger.py
diff --git a/zuul/driver/git/__init__.py b/zuul/driver/git/__init__.py
new file mode 100644
index 0000000..abedf6a
--- /dev/null
+++ b/zuul/driver/git/__init__.py
@@ -0,0 +1,27 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from zuul.driver import Driver, ConnectionInterface, SourceInterface
+import gitconnection
+import gitsource
+
+
+class GitDriver(Driver, ConnectionInterface, SourceInterface):
+    name = 'git'
+
+    def getConnection(self, name, config):
+        return gitconnection.GitConnection(self, name, config)
+
+    def getSource(self, connection):
+        return gitsource.GitSource(self, connection)
diff --git a/zuul/driver/git/gitconnection.py b/zuul/driver/git/gitconnection.py
new file mode 100644
index 0000000..e72cc77
--- /dev/null
+++ b/zuul/driver/git/gitconnection.py
@@ -0,0 +1,54 @@
+# Copyright 2011 OpenStack, LLC.
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import voluptuous as v
+
+from zuul.connection import BaseConnection
+from zuul.model import Project
+
+
+class GitConnection(BaseConnection):
+    driver_name = 'git'
+    log = logging.getLogger("connection.git")
+
+    def __init__(self, driver, connection_name, connection_config):
+        super(GitConnection, self).__init__(driver, connection_name,
+                                            connection_config)
+        if 'baseurl' not in self.connection_config:
+            raise Exception('baseurl is required for git connections in '
+                            '%s' % self.connection_name)
+
+        self.baseurl = self.connection_config.get('baseurl')
+        self.projects = {}
+
+    def getProject(self, name):
+        if name not in self.projects:
+            self.projects[name] = Project(name, self.connection_name)
+        return self.projects[name]
+
+    def getProjectBranches(self, project):
+        # TODO(jeblair): implement; this will need to handle local or
+        # remote git urls.
+        raise NotImplemented()
+
+    def getGitUrl(self, project):
+        url = '%s/%s' % (self.baseurl, project.name)
+        return url
+
+
+def getSchema():
+    git_connection = v.Any(str, v.Schema({}, extra=True))
+    return git_connection
diff --git a/zuul/driver/git/gitsource.py b/zuul/driver/git/gitsource.py
new file mode 100644
index 0000000..bbe799a
--- /dev/null
+++ b/zuul/driver/git/gitsource.py
@@ -0,0 +1,45 @@
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+from zuul.source import BaseSource
+
+
+class GitSource(BaseSource):
+    name = 'git'
+    log = logging.getLogger("zuul.source.Git")
+
+    def getRefSha(self, project, ref):
+        raise NotImplemented()
+
+    def isMerged(self, change, head=None):
+        raise NotImplemented()
+
+    def canMerge(self, change, allow_needs):
+        raise NotImplemented()
+
+    def getChange(self, event, refresh=False):
+        raise NotImplemented()
+
+    def getProject(self, name):
+        return self.connection.getProject(name)
+
+    def getProjectBranches(self, project):
+        return self.connection.getProjectBranches(project)
+
+    def getGitUrl(self, project):
+        return self.connection.getGitUrl(project)
+
+    def getProjectOpenChanges(self, project):
+        raise NotImplemented()
diff --git a/zuul/driver/smtp/__init__.py b/zuul/driver/smtp/__init__.py
new file mode 100644
index 0000000..0745644
--- /dev/null
+++ b/zuul/driver/smtp/__init__.py
@@ -0,0 +1,30 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from zuul.driver import Driver, ConnectionInterface, ReporterInterface
+import smtpconnection
+import smtpreporter
+
+
+class SMTPDriver(Driver, ConnectionInterface, ReporterInterface):
+    name = 'smtp'
+
+    def getConnection(self, name, config):
+        return smtpconnection.SMTPConnection(self, name, config)
+
+    def getReporter(self, connection, config=None):
+        return smtpreporter.SMTPReporter(self, connection, config)
+
+    def getReporterSchema(self):
+        return smtpreporter.getSchema()
diff --git a/zuul/connection/smtp.py b/zuul/driver/smtp/smtpconnection.py
similarity index 92%
rename from zuul/connection/smtp.py
rename to zuul/driver/smtp/smtpconnection.py
index 125cb15..6338cd5 100644
--- a/zuul/connection/smtp.py
+++ b/zuul/driver/smtp/smtpconnection.py
@@ -25,9 +25,8 @@
     driver_name = 'smtp'
     log = logging.getLogger("zuul.SMTPConnection")
 
-    def __init__(self, connection_name, connection_config):
-
-        super(SMTPConnection, self).__init__(connection_name,
+    def __init__(self, driver, connection_name, connection_config):
+        super(SMTPConnection, self).__init__(driver, connection_name,
                                              connection_config)
 
         self.smtp_server = self.connection_config.get(
diff --git a/zuul/reporter/smtp.py b/zuul/driver/smtp/smtpreporter.py
similarity index 77%
rename from zuul/reporter/smtp.py
rename to zuul/driver/smtp/smtpreporter.py
index 3935098..dd618ef 100644
--- a/zuul/reporter/smtp.py
+++ b/zuul/driver/smtp/smtpreporter.py
@@ -29,15 +29,15 @@
         message = self._formatItemReport(pipeline, item)
 
         self.log.debug("Report change %s, params %s, message: %s" %
-                       (item.change, self.reporter_config, message))
+                       (item.change, self.config, message))
 
-        from_email = self.reporter_config['from'] \
-            if 'from' in self.reporter_config else None
-        to_email = self.reporter_config['to'] \
-            if 'to' in self.reporter_config else None
+        from_email = self.config['from'] \
+            if 'from' in self.config else None
+        to_email = self.config['to'] \
+            if 'to' in self.config else None
 
-        if 'subject' in self.reporter_config:
-            subject = self.reporter_config['subject'].format(
+        if 'subject' in self.config:
+            subject = self.config['subject'].format(
                 change=item.change)
         else:
             subject = "Report for change %s" % item.change
diff --git a/zuul/driver/timer/__init__.py b/zuul/driver/timer/__init__.py
new file mode 100644
index 0000000..3ce0b8d
--- /dev/null
+++ b/zuul/driver/timer/__init__.py
@@ -0,0 +1,94 @@
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+# Copyright 2013 OpenStack Foundation
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from apscheduler.schedulers.background import BackgroundScheduler
+from apscheduler.triggers.cron import CronTrigger
+
+from zuul.driver import Driver, TriggerInterface
+from zuul.model import TriggerEvent
+import timertrigger
+
+
+class TimerDriver(Driver, TriggerInterface):
+    name = 'timer'
+    log = logging.getLogger("zuul.TimerDriver")
+
+    def __init__(self):
+        self.apsched = BackgroundScheduler()
+        self.apsched.start()
+        self.tenant_jobs = {}
+
+    def registerScheduler(self, scheduler):
+        self.sched = scheduler
+
+    def reconfigure(self, tenant):
+        self._removeJobs(tenant)
+        self._addJobs(tenant)
+
+    def _removeJobs(self, tenant):
+        jobs = self.tenant_jobs.get(tenant.name, [])
+        for job in jobs:
+            job.remove()
+
+    def _addJobs(self, tenant):
+        jobs = []
+        self.tenant_jobs[tenant.name] = jobs
+        for pipeline in tenant.layout.pipelines.values():
+            for ef in pipeline.manager.event_filters:
+                if not isinstance(ef.trigger, timertrigger.TimerTrigger):
+                    continue
+                for timespec in ef.timespecs:
+                    parts = timespec.split()
+                    if len(parts) < 5 or len(parts) > 6:
+                        self.log.error(
+                            "Unable to parse time value '%s' "
+                            "defined in pipeline %s" % (
+                                timespec,
+                                pipeline.name))
+                        continue
+                    minute, hour, dom, month, dow = parts[:5]
+                    if len(parts) > 5:
+                        second = parts[5]
+                    else:
+                        second = None
+                    trigger = CronTrigger(day=dom, day_of_week=dow, hour=hour,
+                                          minute=minute, second=second)
+
+                    job = self.apsched.add_job(
+                        self._onTrigger, trigger=trigger,
+                        args=(tenant, pipeline.name, timespec,))
+                    jobs.append(job)
+
+    def _onTrigger(self, tenant, pipeline_name, timespec):
+        for project_name in tenant.layout.project_configs.keys():
+            event = TriggerEvent()
+            event.type = 'timer'
+            event.timespec = timespec
+            event.forced_pipeline = pipeline_name
+            event.project_name = project_name
+            self.log.debug("Adding event %s" % event)
+            self.sched.addEvent(event)
+
+    def stop(self):
+        self.apsched.shutdown()
+
+    def getTrigger(self, connection_name, config=None):
+        return timertrigger.TimerTrigger(self, config)
+
+    def getTriggerSchema(self):
+        return timertrigger.getSchema()
diff --git a/zuul/driver/timer/timertrigger.py b/zuul/driver/timer/timertrigger.py
new file mode 100644
index 0000000..b0f282c
--- /dev/null
+++ b/zuul/driver/timer/timertrigger.py
@@ -0,0 +1,46 @@
+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+# Copyright 2013 OpenStack Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import voluptuous as v
+
+from zuul.model import EventFilter
+from zuul.trigger import BaseTrigger
+
+
+class TimerTrigger(BaseTrigger):
+    name = 'timer'
+
+    def getEventFilters(self, trigger_conf):
+        def toList(item):
+            if not item:
+                return []
+            if isinstance(item, list):
+                return item
+            return [item]
+
+        efilters = []
+        for trigger in toList(trigger_conf):
+            f = EventFilter(trigger=self,
+                            types=['timer'],
+                            timespecs=toList(trigger['time']))
+
+            efilters.append(f)
+
+        return efilters
+
+
+def getSchema():
+    timer_trigger = {v.Required('time'): str}
+    return timer_trigger
diff --git a/zuul/driver/zuul/__init__.py b/zuul/driver/zuul/__init__.py
new file mode 100644
index 0000000..1bc0ee9
--- /dev/null
+++ b/zuul/driver/zuul/__init__.py
@@ -0,0 +1,112 @@
+# Copyright 2016 Red Hat, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from zuul.driver import Driver, TriggerInterface
+from zuul.model import TriggerEvent
+
+import zuultrigger
+
+PARENT_CHANGE_ENQUEUED = 'parent-change-enqueued'
+PROJECT_CHANGE_MERGED = 'project-change-merged'
+
+
+class ZuulDriver(Driver, TriggerInterface):
+    name = 'zuul'
+    log = logging.getLogger("zuul.ZuulTrigger")
+
+    def __init__(self):
+        self.tenant_events = {}
+
+    def registerScheduler(self, scheduler):
+        self.sched = scheduler
+
+    def reconfigure(self, tenant):
+        events = set()
+        self.tenant_events[tenant.name] = events
+        for pipeline in tenant.layout.pipelines.values():
+            for ef in pipeline.manager.event_filters:
+                if not isinstance(ef.trigger, zuultrigger.ZuulTrigger):
+                    continue
+                if PARENT_CHANGE_ENQUEUED in ef._types:
+                    events.add(PARENT_CHANGE_ENQUEUED)
+                elif PROJECT_CHANGE_MERGED in ef._types:
+                    events.add(PROJECT_CHANGE_MERGED)
+
+    def onChangeMerged(self, tenant, change, source):
+        # Called each time zuul merges a change
+        if PROJECT_CHANGE_MERGED in self.tenant_events[tenant.name]:
+            try:
+                self._createProjectChangeMergedEvents(change, source)
+            except Exception:
+                self.log.exception(
+                    "Unable to create project-change-merged events for "
+                    "%s" % (change,))
+
+    def onChangeEnqueued(self, tenant, change, pipeline):
+        self.log.debug("onChangeEnqueued %s", self.tenant_events[tenant.name])
+        # Called each time a change is enqueued in a pipeline
+        if PARENT_CHANGE_ENQUEUED in self.tenant_events[tenant.name]:
+            try:
+                self._createParentChangeEnqueuedEvents(change, pipeline)
+            except Exception:
+                self.log.exception(
+                    "Unable to create parent-change-enqueued events for "
+                    "%s in %s" % (change, pipeline))
+
+    def _createProjectChangeMergedEvents(self, change, source):
+        changes = source.getProjectOpenChanges(
+            change.project)
+        for open_change in changes:
+            self._createProjectChangeMergedEvent(open_change)
+
+    def _createProjectChangeMergedEvent(self, change):
+        event = TriggerEvent()
+        event.type = PROJECT_CHANGE_MERGED
+        event.trigger_name = self.name
+        event.project_name = change.project.name
+        event.change_number = change.number
+        event.branch = change.branch
+        event.change_url = change.url
+        event.patch_number = change.patchset
+        event.refspec = change.refspec
+        self.sched.addEvent(event)
+
+    def _createParentChangeEnqueuedEvents(self, change, pipeline):
+        self.log.debug("Checking for changes needing %s:" % change)
+        if not hasattr(change, 'needed_by_changes'):
+            self.log.debug("  Changeish does not support dependencies")
+            return
+        for needs in change.needed_by_changes:
+            self._createParentChangeEnqueuedEvent(needs, pipeline)
+
+    def _createParentChangeEnqueuedEvent(self, change, pipeline):
+        event = TriggerEvent()
+        event.type = PARENT_CHANGE_ENQUEUED
+        event.trigger_name = self.name
+        event.pipeline_name = pipeline.name
+        event.project_name = change.project.name
+        event.change_number = change.number
+        event.branch = change.branch
+        event.change_url = change.url
+        event.patch_number = change.patchset
+        event.refspec = change.refspec
+        self.sched.addEvent(event)
+
+    def getTrigger(self, connection_name, config=None):
+        return zuultrigger.ZuulTrigger(self, config)
+
+    def getTriggerSchema(self):
+        return zuultrigger.getSchema()
diff --git a/zuul/driver/zuul/zuultrigger.py b/zuul/driver/zuul/zuultrigger.py
new file mode 100644
index 0000000..bb7c04e
--- /dev/null
+++ b/zuul/driver/zuul/zuultrigger.py
@@ -0,0 +1,77 @@
+# Copyright 2012-2014 Hewlett-Packard Development Company, L.P.
+# Copyright 2013 OpenStack Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import voluptuous as v
+from zuul.model import EventFilter
+from zuul.trigger import BaseTrigger
+
+
+class ZuulTrigger(BaseTrigger):
+    name = 'zuul'
+    log = logging.getLogger("zuul.ZuulTrigger")
+
+    def __init__(self, connection, config=None):
+        super(ZuulTrigger, self).__init__(connection, config)
+        self._handle_parent_change_enqueued_events = False
+        self._handle_project_change_merged_events = False
+
+    def getEventFilters(self, trigger_conf):
+        def toList(item):
+            if not item:
+                return []
+            if isinstance(item, list):
+                return item
+            return [item]
+
+        efilters = []
+        for trigger in toList(trigger_conf):
+            f = EventFilter(
+                trigger=self,
+                types=toList(trigger['event']),
+                pipelines=toList(trigger.get('pipeline')),
+                required_approvals=(
+                    toList(trigger.get('require-approval'))
+                ),
+                reject_approvals=toList(
+                    trigger.get('reject-approval')
+                ),
+            )
+            efilters.append(f)
+
+        return efilters
+
+
+def getSchema():
+    def toList(x):
+        return v.Any([x], x)
+
+    approval = v.Schema({'username': str,
+                         'email-filter': str,
+                         'email': str,
+                         'older-than': str,
+                         'newer-than': str,
+                         }, extra=True)
+
+    zuul_trigger = {
+        v.Required('event'):
+        toList(v.Any('parent-change-enqueued',
+                     'project-change-merged')),
+        'pipeline': toList(str),
+        'require-approval': toList(approval),
+        'reject-approval': toList(approval),
+    }
+
+    return zuul_trigger
diff --git a/zuul/launcher/ansiblelaunchserver.py b/zuul/launcher/ansiblelaunchserver.py
index c70302b..875cf2b 100644
--- a/zuul/launcher/ansiblelaunchserver.py
+++ b/zuul/launcher/ansiblelaunchserver.py
@@ -12,6 +12,13 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+############################################################################
+# NOTE(jhesketh): This file has been superceeded by zuul/launcher/server.py.
+# It is kept here to make merging master back into v3 easier. Once closer
+# to completion it can be removed.
+############################################################################
+
+
 import json
 import logging
 import os
diff --git a/zuul/launcher/gearman.py b/zuul/launcher/client.py
similarity index 73%
rename from zuul/launcher/gearman.py
rename to zuul/launcher/client.py
index 2840ba6..6abd6f4 100644
--- a/zuul/launcher/gearman.py
+++ b/zuul/launcher/client.py
@@ -12,8 +12,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import copy
 import gear
-import inspect
 import json
 import logging
 import os
@@ -26,6 +26,45 @@
 from zuul.model import Build
 
 
+def make_merger_item(item):
+    # Create a dictionary with all info about the item needed by
+    # the merger.
+    number = None
+    patchset = None
+    oldrev = None
+    newrev = None
+    refspec = None
+    if hasattr(item.change, 'number'):
+        number = item.change.number
+        patchset = item.change.patchset
+        refspec = item.change.refspec
+        branch = item.change.branch
+    elif hasattr(item.change, 'newrev'):
+        oldrev = item.change.oldrev
+        newrev = item.change.newrev
+        branch = item.change.ref
+    else:
+        oldrev = None
+        newrev = None
+        branch = None
+    connection_name = item.pipeline.source.connection.connection_name
+    project = item.change.project.name
+
+    return dict(project=project,
+                url=item.pipeline.source.getGitUrl(
+                    item.change.project),
+                connection_name=connection_name,
+                merge_mode=item.current_build_set.getMergeMode(project),
+                refspec=refspec,
+                branch=branch,
+                ref=item.current_build_set.ref,
+                number=number,
+                patchset=patchset,
+                oldrev=oldrev,
+                newrev=newrev,
+                )
+
+
 class GearmanCleanup(threading.Thread):
     """ A thread that checks to see if outstanding builds have
     completed without reporting back. """
@@ -64,7 +103,7 @@
 
 class ZuulGearmanClient(gear.Client):
     def __init__(self, zuul_gearman):
-        super(ZuulGearmanClient, self).__init__()
+        super(ZuulGearmanClient, self).__init__('Zuul Launch Client')
         self.__zuul_gearman = zuul_gearman
 
     def handleWorkComplete(self, packet):
@@ -105,52 +144,9 @@
                 if build.__gearman_job.handle == handle:
                     self.__zuul_gearman.onUnknownJob(job)
 
-    def waitForGearmanToSettle(self):
-        # If we're running the internal gearman server, it's possible
-        # that after a restart or reload, we may be immediately ready
-        # to run jobs but all the gearman workers may not have
-        # registered yet.  Give them a sporting chance to show up
-        # before we start declaring jobs lost because we don't have
-        # gearman functions registered for them.
 
-        # Spend up to 30 seconds after we connect to the gearman
-        # server waiting for the set of defined jobs to become
-        # consistent over a sliding 5 second window.
-
-        self.log.info("Waiting for connection to internal Gearman server")
-        self.waitForServer()
-        self.log.info("Waiting for gearman function set to settle")
-        start = time.time()
-        last_change = start
-        all_functions = set()
-        while time.time() - start < 30:
-            now = time.time()
-            last_functions = set()
-            for connection in self.active_connections:
-                try:
-                    req = gear.StatusAdminRequest()
-                    connection.sendAdminRequest(req, timeout=300)
-                except Exception:
-                    self.log.exception("Exception while checking functions")
-                    continue
-                for line in req.response.split('\n'):
-                    parts = [x.strip() for x in line.split()]
-                    if not parts or parts[0] == '.':
-                        continue
-                    last_functions.add(parts[0])
-            if last_functions != all_functions:
-                last_change = now
-                all_functions.update(last_functions)
-            else:
-                if now - last_change > 5:
-                    self.log.info("Gearman function set has settled")
-                    break
-            time.sleep(1)
-        self.log.info("Done waiting for Gearman server")
-
-
-class Gearman(object):
-    log = logging.getLogger("zuul.Gearman")
+class LaunchClient(object):
+    log = logging.getLogger("zuul.LaunchClient")
     negative_function_cache_ttl = 5
 
     def __init__(self, config, sched, swift):
@@ -165,19 +161,10 @@
             port = config.get('gearman', 'port')
         else:
             port = 4730
-        if config.has_option('gearman', 'check_job_registration'):
-            self.job_registration = config.getboolean(
-                'gearman', 'check_job_registration')
-        else:
-            self.job_registration = True
 
         self.gearman = ZuulGearmanClient(self)
         self.gearman.addServer(server, port)
 
-        if (config.has_option('gearman_server', 'start') and
-            config.getboolean('gearman_server', 'start')):
-            self.gearman.waitForGearmanToSettle()
-
         self.cleanup_thread = GearmanCleanup(self)
         self.cleanup_thread.start()
         self.function_cache = set()
@@ -230,20 +217,7 @@
         # NOTE(jhesketh): The params need to stay in a key=value data pair
         # as workers cannot necessarily handle lists.
 
-        if callable(job.parameter_function):
-            pargs = inspect.getargspec(job.parameter_function)
-            if len(pargs.args) == 2:
-                job.parameter_function(item, params)
-            else:
-                job.parameter_function(item, job, params)
-            self.log.debug("Custom parameter function used for job %s, "
-                           "change: %s, params: %s" % (job, item.change,
-                                                       params))
-
-        # NOTE(mmedvede): Swift parameter creation should remain after the call
-        # to job.parameter_function to make it possible to update LOG_PATH for
-        # swift upload url using parameter_function mechanism.
-        if job.swift and self.swift.connection:
+        if 'swift' in job.auth and self.swift.connection:
 
             for name, s in job.swift.items():
                 swift_instructions = {}
@@ -276,12 +250,27 @@
     def launch(self, job, item, pipeline, dependent_items=[]):
         uuid = str(uuid4().hex)
         self.log.info(
-            "Launch job %s (uuid: %s) for change %s with dependent "
-            "changes %s" % (
-                job, uuid, item.change,
+            "Launch job %s (uuid: %s) on nodes %s for change %s "
+            "with dependent changes %s" % (
+                job, uuid,
+                item.current_build_set.getJobNodeSet(job.name),
+                item.change,
                 [x.change for x in dependent_items]))
         dependent_items = dependent_items[:]
         dependent_items.reverse()
+        # TODOv3(jeblair): This ansible vars data structure will
+        # replace the environment variables below.
+        zuul_params = dict(uuid=uuid,
+                           pipeline=pipeline.name,
+                           job=job.name,
+                           project=item.change.project.name)
+        if hasattr(item.change, 'branch'):
+            zuul_params['branch'] = item.change.branch
+        if hasattr(item.change, 'number'):
+            zuul_params['change'] = item.change.number
+        if hasattr(item.change, 'patchset'):
+            zuul_params['patchset'] = item.change.patchset
+        # Legacy environment variables
         params = dict(ZUUL_UUID=uuid,
                       ZUUL_PROJECT=item.change.project.name)
         params['ZUUL_PIPELINE'] = pipeline.name
@@ -313,7 +302,7 @@
             params['ZUUL_REF'] = item.change.ref
             params['ZUUL_COMMIT'] = item.change.newrev
 
-        # The destination_path is a unqiue path for this build request
+        # The destination_path is a unique path for this build request
         # and generally where the logs are expected to be placed
         destination_path = os.path.join(item.change.getBasePath(),
                                         pipeline.name, job.name, uuid[:7])
@@ -344,10 +333,44 @@
         # ZUUL_OLDREV
         # ZUUL_NEWREV
 
-        if 'ZUUL_NODE' in params:
-            name = "build:%s:%s" % (job.name, params['ZUUL_NODE'])
-        else:
-            name = "build:%s" % job.name
+        all_items = dependent_items + [item]
+        merger_items = map(make_merger_item, all_items)
+
+        params['job'] = job.name
+        params['timeout'] = job.timeout
+        params['items'] = merger_items
+        params['projects'] = []
+
+        if job.name != 'noop':
+            params['playbooks'] = [x.toDict() for x in job.run]
+            params['pre_playbooks'] = [x.toDict() for x in job.pre_run]
+            params['post_playbooks'] = [x.toDict() for x in job.post_run]
+            params['roles'] = [x.toDict() for x in job.roles]
+
+        nodes = []
+        for node in item.current_build_set.getJobNodeSet(job.name).getNodes():
+            nodes.append(dict(name=node.name, image=node.image,
+                              public_ipv6=node.public_ipv6,
+                              public_ipv4=node.public_ipv4))
+        params['nodes'] = nodes
+        params['vars'] = copy.deepcopy(job.variables)
+        params['vars']['zuul'] = zuul_params
+        projects = set()
+        if job.repos:
+            for repo in job.repos:
+                project = item.pipeline.source.getProject(repo)
+                params['projects'].append(
+                    dict(name=repo,
+                         url=item.pipeline.source.getGitUrl(project)))
+                projects.add(project)
+        for item in all_items:
+            if item.change.project not in projects:
+                params['projects'].append(
+                    dict(name=item.change.project.name,
+                         url=item.pipeline.source.getGitUrl(
+                             item.change.project)))
+                projects.add(item.change.project)
+
         build = Build(job, uuid)
         build.parameters = params
 
@@ -355,18 +378,12 @@
             self.sched.onBuildCompleted(build, 'SUCCESS')
             return build
 
-        gearman_job = gear.Job(name, json.dumps(params),
+        gearman_job = gear.Job('launcher:launch', json.dumps(params),
                                unique=uuid)
         build.__gearman_job = gearman_job
+        build.__gearman_manager = None
         self.builds[uuid] = build
 
-        if self.job_registration and not self.isJobRegistered(
-                gearman_job.name):
-            self.log.error("Job %s is not registered with Gearman" %
-                           gearman_job)
-            self.onBuildCompleted(gearman_job, 'NOT_REGISTERED')
-            return build
-
         # NOTE(pabelanger): Rather then looping forever, check to see if job
         # has passed attempts limit.
         if item.current_build_set.getTries(job.name) > job.attempts:
@@ -400,6 +417,7 @@
         return build
 
     def cancel(self, build):
+        # Returns whether a running build was canceled
         self.log.info("Cancel build %s for job %s" % (build, build.job))
 
         build.canceled = True
@@ -407,29 +425,30 @@
             job = build.__gearman_job  # noqa
         except AttributeError:
             self.log.debug("Build %s has no associated gearman job" % build)
-            return
+            return False
 
-        if build.number is not None:
+        # TODOv3(jeblair): make a nicer way of recording build start.
+        if build.url is not None:
             self.log.debug("Build %s has already started" % build)
             self.cancelRunningBuild(build)
             self.log.debug("Canceled running build %s" % build)
-            return
+            return True
         else:
             self.log.debug("Build %s has not started yet" % build)
 
         self.log.debug("Looking for build %s in queue" % build)
         if self.cancelJobInQueue(build):
             self.log.debug("Removed build %s from queue" % build)
-            return
+            return False
 
         time.sleep(1)
 
         self.log.debug("Still unable to find build %s to cancel" % build)
-        if build.number:
+        if build.url:
             self.log.debug("Build %s has just started" % build)
             self.log.debug("Canceled running build %s" % build)
             self.cancelRunningBuild(build)
-            return
+            return True
         self.log.debug("Unable to cancel build %s" % build)
 
     def onBuildCompleted(self, job, result=None):
@@ -442,19 +461,18 @@
             data = getJobData(job)
             build.node_labels = data.get('node_labels', [])
             build.node_name = data.get('node_name')
-            if not build.canceled:
-                if result is None:
-                    result = data.get('result')
-                if result is None:
-                    build.retry = True
-                self.log.info("Build %s complete, result %s" %
-                              (job, result))
-                self.sched.onBuildCompleted(build, result)
+            if result is None:
+                result = data.get('result')
+            if result is None:
+                build.retry = True
+            self.log.info("Build %s complete, result %s" %
+                          (job, result))
+            self.sched.onBuildCompleted(build, result)
             # The test suite expects the build to be removed from the
             # internal dict after it's added to the report queue.
             del self.builds[job.unique]
         else:
-            if not job.name.startswith("stop:"):
+            if not job.name.startswith("launcher:stop:"):
                 self.log.error("Unable to find build %s" % job.unique)
 
     def onWorkStatus(self, job):
@@ -462,14 +480,14 @@
         self.log.debug("Build %s update %s" % (job, data))
         build = self.builds.get(job.unique)
         if build:
+            started = (build.url is not None)
             # Allow URL to be updated
-            build.url = data.get('url') or build.url
+            build.url = data.get('url', build.url)
             # Update information about worker
             build.worker.updateFromData(data)
 
-            if build.number is None:
+            if not started:
                 self.log.info("Build %s started" % job)
-                build.number = data.get('number')
                 build.__gearman_manager = data.get('manager')
                 self.sched.onBuildStarted(build)
         else:
@@ -499,10 +517,12 @@
         return False
 
     def cancelRunningBuild(self, build):
+        if not build.__gearman_manager:
+            self.log.error("Build %s has no manager while canceling" %
+                           (build,))
         stop_uuid = str(uuid4().hex)
-        data = dict(name=build.job.name,
-                    number=build.number)
-        stop_job = gear.Job("stop:%s" % build.__gearman_manager,
+        data = dict(uuid=build.__gearman_job.unique)
+        stop_job = gear.Job("launcher:stop:%s" % build.__gearman_manager,
                             json.dumps(data), unique=stop_uuid)
         self.meta_jobs[stop_uuid] = stop_job
         self.log.debug("Submitting stop job: %s", stop_job)
@@ -510,28 +530,6 @@
                                timeout=300)
         return True
 
-    def setBuildDescription(self, build, desc):
-        try:
-            name = "set_description:%s" % build.__gearman_manager
-        except AttributeError:
-            # We haven't yet received the first data packet that tells
-            # us where the job is running.
-            return False
-
-        if self.job_registration and not self.isJobRegistered(name):
-            return False
-
-        desc_uuid = str(uuid4().hex)
-        data = dict(name=build.job.name,
-                    number=build.number,
-                    html_description=desc)
-        desc_job = gear.Job(name, json.dumps(data), unique=desc_uuid)
-        self.meta_jobs[desc_uuid] = desc_job
-        self.log.debug("Submitting describe job: %s", desc_job)
-        self.gearman.submitJob(desc_job, precedence=gear.PRECEDENCE_LOW,
-                               timeout=300)
-        return True
-
     def lookForLostBuilds(self):
         self.log.debug("Looking for lost builds")
         for build in self.builds.values():
diff --git a/zuul/launcher/server.py b/zuul/launcher/server.py
new file mode 100644
index 0000000..1b8d2c6
--- /dev/null
+++ b/zuul/launcher/server.py
@@ -0,0 +1,961 @@
+# Copyright 2014 OpenStack Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import collections
+import json
+import logging
+import os
+import shutil
+import signal
+import socket
+import subprocess
+import tempfile
+import threading
+import time
+import traceback
+import yaml
+
+import gear
+import git
+
+import zuul.merger.merger
+import zuul.ansible.action
+import zuul.ansible.callback
+import zuul.ansible.library
+from zuul.lib import commandsocket
+
+COMMANDS = ['stop', 'pause', 'unpause', 'graceful', 'verbose',
+            'unverbose']
+
+
+class Watchdog(object):
+    def __init__(self, timeout, function, args):
+        self.timeout = timeout
+        self.function = function
+        self.args = args
+        self.thread = threading.Thread(target=self._run)
+        self.thread.daemon = True
+        self.timed_out = None
+
+    def _run(self):
+        while self._running and time.time() < self.end:
+            time.sleep(10)
+        if self._running:
+            self.timed_out = True
+            self.function(*self.args)
+        self.timed_out = False
+
+    def start(self):
+        self._running = True
+        self.end = time.time() + self.timeout
+        self.thread.start()
+
+    def stop(self):
+        self._running = False
+
+# TODOv3(mordred): put git repos in a hierarchy that includes source
+# hostname, eg: git.openstack.org/openstack/nova.  Also, configure
+# sources to have an alias, so that the review.openstack.org source
+# repos end up in git.openstack.org.
+
+
+class JobDirPlaybook(object):
+    def __init__(self, root):
+        self.root = root
+        self.trusted = None
+        self.path = None
+
+
+class JobDir(object):
+    def __init__(self, root=None, keep=False):
+        # root
+        #   ansible
+        #     trusted.cfg
+        #     untrusted.cfg
+        #   work
+        #     src
+        #     logs
+        self.keep = keep
+        self.root = tempfile.mkdtemp(dir=root)
+        # Work
+        self.work_root = os.path.join(self.root, 'work')
+        os.makedirs(self.work_root)
+        self.src_root = os.path.join(self.work_root, 'src')
+        os.makedirs(self.src_root)
+        self.log_root = os.path.join(self.work_root, 'logs')
+        os.makedirs(self.log_root)
+        # Ansible
+        self.ansible_root = os.path.join(self.root, 'ansible')
+        os.makedirs(self.ansible_root)
+        self.known_hosts = os.path.join(self.ansible_root, 'known_hosts')
+        self.inventory = os.path.join(self.ansible_root, 'inventory')
+        self.vars = os.path.join(self.ansible_root, 'vars.yaml')
+        self.playbooks = []  # The list of candidate playbooks
+        self.playbook = None  # A pointer to the candidate we have chosen
+        self.pre_playbooks = []
+        self.post_playbooks = []
+        self.roles = []
+        self.roles_path = []
+        self.untrusted_config = os.path.join(
+            self.ansible_root, 'untrusted.cfg')
+        self.trusted_config = os.path.join(self.ansible_root, 'trusted.cfg')
+        self.ansible_log = os.path.join(self.log_root, 'ansible_log.txt')
+
+    def addPrePlaybook(self):
+        count = len(self.pre_playbooks)
+        root = os.path.join(self.ansible_root, 'pre_playbook_%i' % (count,))
+        os.makedirs(root)
+        playbook = JobDirPlaybook(root)
+        self.pre_playbooks.append(playbook)
+        return playbook
+
+    def addPostPlaybook(self):
+        count = len(self.post_playbooks)
+        root = os.path.join(self.ansible_root, 'post_playbook_%i' % (count,))
+        os.makedirs(root)
+        playbook = JobDirPlaybook(root)
+        self.post_playbooks.append(playbook)
+        return playbook
+
+    def addPlaybook(self):
+        count = len(self.playbooks)
+        root = os.path.join(self.ansible_root, 'playbook_%i' % (count,))
+        os.makedirs(root)
+        playbook = JobDirPlaybook(root)
+        self.playbooks.append(playbook)
+        return playbook
+
+    def addRole(self):
+        count = len(self.roles)
+        root = os.path.join(self.ansible_root, 'role_%i' % (count,))
+        os.makedirs(root)
+        self.roles.append(root)
+        return root
+
+    def cleanup(self):
+        if not self.keep:
+            shutil.rmtree(self.root)
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, etype, value, tb):
+        self.cleanup()
+
+
+class UpdateTask(object):
+    def __init__(self, project, url):
+        self.project = project
+        self.url = url
+        self.event = threading.Event()
+
+    def __eq__(self, other):
+        if other.project == self.project:
+            return True
+        return False
+
+    def wait(self):
+        self.event.wait()
+
+    def setComplete(self):
+        self.event.set()
+
+
+class DeduplicateQueue(object):
+    def __init__(self):
+        self.queue = collections.deque()
+        self.condition = threading.Condition()
+
+    def qsize(self):
+        return len(self.queue)
+
+    def put(self, item):
+        # Returns the original item if added, or an equivalent item if
+        # already enqueued.
+        self.condition.acquire()
+        ret = None
+        try:
+            for x in self.queue:
+                if item == x:
+                    ret = x
+            if ret is None:
+                ret = item
+                self.queue.append(item)
+                self.condition.notify()
+        finally:
+            self.condition.release()
+        return ret
+
+    def get(self):
+        self.condition.acquire()
+        try:
+            while True:
+                try:
+                    ret = self.queue.popleft()
+                    return ret
+                except IndexError:
+                    pass
+                self.condition.wait()
+        finally:
+            self.condition.release()
+
+
+class LaunchServer(object):
+    log = logging.getLogger("zuul.LaunchServer")
+
+    def __init__(self, config, connections={}, jobdir_root=None,
+                 keep_jobdir=False):
+        self.config = config
+        self.keep_jobdir = keep_jobdir
+        self.jobdir_root = jobdir_root
+        # TODOv3(mordred): make the launcher name more unique --
+        # perhaps hostname+pid.
+        self.hostname = socket.gethostname()
+        self.zuul_url = config.get('merger', 'zuul_url')
+        self.command_map = dict(
+            stop=self.stop,
+            pause=self.pause,
+            unpause=self.unpause,
+            graceful=self.graceful,
+            verbose=self.verboseOn,
+            unverbose=self.verboseOff,
+        )
+
+        if self.config.has_option('launcher', 'git_dir'):
+            self.merge_root = self.config.get('launcher', 'git_dir')
+        else:
+            self.merge_root = '/var/lib/zuul/launcher-git'
+
+        if self.config.has_option('merger', 'git_user_email'):
+            self.merge_email = self.config.get('merger', 'git_user_email')
+        else:
+            self.merge_email = None
+
+        if self.config.has_option('merger', 'git_user_name'):
+            self.merge_name = self.config.get('merger', 'git_user_name')
+        else:
+            self.merge_name = None
+
+        self.connections = connections
+        # This merger and its git repos are used to maintain
+        # up-to-date copies of all the repos that are used by jobs, as
+        # well as to support the merger:cat functon to supply
+        # configuration information to Zuul when it starts.
+        self.merger = self._getMerger(self.merge_root)
+        self.update_queue = DeduplicateQueue()
+
+        if self.config.has_option('zuul', 'state_dir'):
+            state_dir = os.path.expanduser(
+                self.config.get('zuul', 'state_dir'))
+        else:
+            state_dir = '/var/lib/zuul'
+        path = os.path.join(state_dir, 'launcher.socket')
+        self.command_socket = commandsocket.CommandSocket(path)
+        ansible_dir = os.path.join(state_dir, 'ansible')
+        self.library_dir = os.path.join(ansible_dir, 'library')
+        if not os.path.exists(self.library_dir):
+            os.makedirs(self.library_dir)
+        self.action_dir = os.path.join(ansible_dir, 'action')
+        if not os.path.exists(self.action_dir):
+            os.makedirs(self.action_dir)
+
+        self.callback_dir = os.path.join(ansible_dir, 'callback')
+        if not os.path.exists(self.callback_dir):
+            os.makedirs(self.callback_dir)
+
+        library_path = os.path.dirname(os.path.abspath(
+            zuul.ansible.library.__file__))
+        for fn in os.listdir(library_path):
+            shutil.copy(os.path.join(library_path, fn), self.library_dir)
+
+        action_path = os.path.dirname(os.path.abspath(
+            zuul.ansible.action.__file__))
+        for fn in os.listdir(action_path):
+            shutil.copy(os.path.join(action_path, fn), self.action_dir)
+
+        callback_path = os.path.dirname(os.path.abspath(
+            zuul.ansible.callback.__file__))
+        for fn in os.listdir(callback_path):
+            shutil.copy(os.path.join(callback_path, fn), self.callback_dir)
+
+        self.job_workers = {}
+
+    def _getMerger(self, root):
+        return zuul.merger.merger.Merger(root, self.connections,
+                                         self.merge_email, self.merge_name)
+
+    def start(self):
+        self._running = True
+        self._command_running = True
+        server = self.config.get('gearman', 'server')
+        if self.config.has_option('gearman', 'port'):
+            port = self.config.get('gearman', 'port')
+        else:
+            port = 4730
+        self.worker = gear.Worker('Zuul Launch Server')
+        self.worker.addServer(server, port)
+        self.log.debug("Waiting for server")
+        self.worker.waitForServer()
+        self.log.debug("Registering")
+        self.register()
+
+        self.log.debug("Starting command processor")
+        self.command_socket.start()
+        self.command_thread = threading.Thread(target=self.runCommand)
+        self.command_thread.daemon = True
+        self.command_thread.start()
+
+        self.log.debug("Starting worker")
+        self.update_thread = threading.Thread(target=self._updateLoop)
+        self.update_thread.daemon = True
+        self.update_thread.start()
+        self.thread = threading.Thread(target=self.run)
+        self.thread.daemon = True
+        self.thread.start()
+
+    def register(self):
+        self.worker.registerFunction("launcher:launch")
+        self.worker.registerFunction("launcher:stop:%s" % self.hostname)
+        self.worker.registerFunction("merger:merge")
+        self.worker.registerFunction("merger:cat")
+
+    def stop(self):
+        self.log.debug("Stopping")
+        self._running = False
+        self.worker.shutdown()
+        self._command_running = False
+        self.command_socket.stop()
+        self.update_queue.put(None)
+        self.log.debug("Stopped")
+
+    def pause(self):
+        # TODOv3: implement
+        pass
+
+    def unpause(self):
+        # TODOv3: implement
+        pass
+
+    def graceful(self):
+        # TODOv3: implement
+        pass
+
+    def verboseOn(self):
+        # TODOv3: implement
+        pass
+
+    def verboseOff(self):
+        # TODOv3: implement
+        pass
+
+    def join(self):
+        self.update_thread.join()
+        self.thread.join()
+
+    def runCommand(self):
+        while self._command_running:
+            try:
+                command = self.command_socket.get()
+                if command != '_stop':
+                    self.command_map[command]()
+            except Exception:
+                self.log.exception("Exception while processing command")
+
+    def _updateLoop(self):
+        while self._running:
+            try:
+                self._innerUpdateLoop()
+            except:
+                self.log.exception("Exception in update thread:")
+
+    def _innerUpdateLoop(self):
+        # Inside of a loop that keeps the main repositories up to date
+        task = self.update_queue.get()
+        if task is None:
+            # We are asked to stop
+            return
+        self.log.info("Updating repo %s from %s" % (task.project, task.url))
+        self.merger.updateRepo(task.project, task.url)
+        self.log.debug("Finished updating repo %s from %s" %
+                       (task.project, task.url))
+        task.setComplete()
+
+    def update(self, project, url):
+        # Update a repository in the main merger
+        task = UpdateTask(project, url)
+        task = self.update_queue.put(task)
+        return task
+
+    def run(self):
+        self.log.debug("Starting launch listener")
+        while self._running:
+            try:
+                job = self.worker.getJob()
+                try:
+                    if job.name == 'launcher:launch':
+                        self.log.debug("Got launch job: %s" % job.unique)
+                        self.launchJob(job)
+                    elif job.name.startswith('launcher:stop'):
+                        self.log.debug("Got stop job: %s" % job.unique)
+                        self.stopJob(job)
+                    elif job.name == 'merger:cat':
+                        self.log.debug("Got cat job: %s" % job.unique)
+                        self.cat(job)
+                    elif job.name == 'merger:merge':
+                        self.log.debug("Got merge job: %s" % job.unique)
+                        self.merge(job)
+                    else:
+                        self.log.error("Unable to handle job %s" % job.name)
+                        job.sendWorkFail()
+                except Exception:
+                    self.log.exception("Exception while running job")
+                    job.sendWorkException(traceback.format_exc())
+            except gear.InterruptedError:
+                pass
+            except Exception:
+                self.log.exception("Exception while getting job")
+
+    def launchJob(self, job):
+        self.job_workers[job.unique] = AnsibleJob(self, job)
+        self.job_workers[job.unique].run()
+
+    def finishJob(self, unique):
+        del(self.job_workers[unique])
+
+    def stopJob(self, job):
+        try:
+            args = json.loads(job.arguments)
+            self.log.debug("Stop job with arguments: %s" % (args,))
+            unique = args['uuid']
+            job_worker = self.job_workers.get(unique)
+            if not job_worker:
+                self.log.debug("Unable to find worker for job %s" % (unique,))
+                return
+            try:
+                job_worker.stop()
+            except Exception:
+                self.log.exception("Exception sending stop command "
+                                   "to worker:")
+        finally:
+            job.sendWorkComplete()
+
+    def cat(self, job):
+        args = json.loads(job.arguments)
+        task = self.update(args['project'], args['url'])
+        task.wait()
+        files = self.merger.getFiles(args['project'], args['url'],
+                                     args['branch'], args['files'])
+        result = dict(updated=True,
+                      files=files,
+                      zuul_url=self.zuul_url)
+        job.sendWorkComplete(json.dumps(result))
+
+    def merge(self, job):
+        args = json.loads(job.arguments)
+        ret = self.merger.mergeChanges(args['items'], args.get('files'))
+        result = dict(merged=(ret is not None),
+                      zuul_url=self.zuul_url)
+        if args.get('files'):
+            result['commit'], result['files'] = ret
+        else:
+            result['commit'] = ret
+        job.sendWorkComplete(json.dumps(result))
+
+
+class AnsibleJob(object):
+    log = logging.getLogger("zuul.AnsibleJob")
+
+    RESULT_NORMAL = 1
+    RESULT_TIMED_OUT = 2
+    RESULT_UNREACHABLE = 3
+    RESULT_ABORTED = 4
+
+    def __init__(self, launcher_server, job):
+        self.launcher_server = launcher_server
+        self.job = job
+        self.jobdir = None
+        self.proc = None
+        self.proc_lock = threading.Lock()
+        self.running = False
+        self.aborted = False
+
+        if self.launcher_server.config.has_option(
+            'launcher', 'private_key_file'):
+            self.private_key_file = self.launcher_server.config.get(
+                'launcher', 'private_key_file')
+        else:
+            self.private_key_file = '~/.ssh/id_rsa'
+
+    def run(self):
+        self.running = True
+        self.thread = threading.Thread(target=self.launch)
+        self.thread.start()
+
+    def stop(self):
+        self.aborted = True
+        self.abortRunningProc()
+        self.thread.join()
+
+    def launch(self):
+        try:
+            self.jobdir = JobDir(root=self.launcher_server.jobdir_root,
+                                 keep=self.launcher_server.keep_jobdir)
+            self._launch()
+        except Exception:
+            self.log.exception("Exception while launching job")
+            self.job.sendWorkException(traceback.format_exc())
+        finally:
+            self.running = False
+            try:
+                self.jobdir.cleanup()
+            except Exception:
+                self.log.exception("Error cleaning up jobdir:")
+            try:
+                self.launcher_server.finishJob(self.job.unique)
+            except Exception:
+                self.log.exception("Error finalizing job thread:")
+
+    def _launch(self):
+        self.log.debug("Job %s: beginning" % (self.job.unique,))
+        self.log.debug("Job %s: args: %s" % (self.job.unique,
+                                             self.job.arguments,))
+        self.log.debug("Job %s: job root at %s" %
+                       (self.job.unique, self.jobdir.root))
+        args = json.loads(self.job.arguments)
+        tasks = []
+        for project in args['projects']:
+            self.log.debug("Job %s: updating project %s" %
+                           (self.job.unique, project['name']))
+            tasks.append(self.launcher_server.update(
+                project['name'], project['url']))
+        for task in tasks:
+            task.wait()
+
+        self.log.debug("Job %s: git updates complete" % (self.job.unique,))
+        for project in args['projects']:
+            self.log.debug("Cloning %s" % (project['name'],))
+            repo = git.Repo.clone_from(
+                os.path.join(self.launcher_server.merge_root,
+                             project['name']),
+                os.path.join(self.jobdir.src_root,
+                             project['name']))
+            repo.remotes.origin.config_writer.set('url', project['url'])
+
+        # Get a merger in order to update the repos involved in this job.
+        merger = self.launcher_server._getMerger(self.jobdir.src_root)
+        merge_items = [i for i in args['items'] if i.get('refspec')]
+        if merge_items:
+            commit = merger.mergeChanges(merge_items)  # noqa
+        else:
+            commit = args['items'][-1]['newrev']  # noqa
+
+        # is the playbook in a repo that we have already prepared?
+        self.preparePlaybookRepos(args)
+
+        self.prepareRoles(args)
+
+        # TODOv3: Ansible the ansible thing here.
+        self.prepareAnsibleFiles(args)
+
+        data = {
+            'manager': self.launcher_server.hostname,
+            'url': 'https://server/job/{}/0/'.format(args['job']),
+            'worker_name': 'My Worker',
+        }
+
+        # TODOv3:
+        # 'name': self.name,
+        # 'manager': self.launch_server.hostname,
+        # 'worker_name': 'My Worker',
+        # 'worker_hostname': 'localhost',
+        # 'worker_ips': ['127.0.0.1', '192.168.1.1'],
+        # 'worker_fqdn': 'zuul.example.org',
+        # 'worker_program': 'FakeBuilder',
+        # 'worker_version': 'v1.1',
+        # 'worker_extra': {'something': 'else'}
+
+        self.job.sendWorkData(json.dumps(data))
+        self.job.sendWorkStatus(0, 100)
+
+        result = self.runPlaybooks(args)
+
+        if result is None:
+            self.job.sendWorkFail()
+            return
+        result = dict(result=result)
+        self.job.sendWorkComplete(json.dumps(result))
+
+    def runPlaybooks(self, args):
+        result = None
+
+        for playbook in self.jobdir.pre_playbooks:
+            # TODOv3(pabelanger): Implement pre-run timeout setting.
+            pre_status, pre_code = self.runAnsiblePlaybook(
+                playbook, args['timeout'])
+            if pre_status != self.RESULT_NORMAL or pre_code != 0:
+                # These should really never fail, so return None and have
+                # zuul try again
+                return result
+
+        job_status, job_code = self.runAnsiblePlaybook(
+            self.jobdir.playbook, args['timeout'])
+        if job_status == self.RESULT_TIMED_OUT:
+            return 'TIMED_OUT'
+        if job_status == self.RESULT_ABORTED:
+            return 'ABORTED'
+        if job_status != self.RESULT_NORMAL:
+            # The result of the job is indeterminate.  Zuul will
+            # run it again.
+            return result
+
+        success = (job_code == 0)
+        if success:
+            result = 'SUCCESS'
+        else:
+            result = 'FAILURE'
+
+        for playbook in self.jobdir.post_playbooks:
+            # TODOv3(pabelanger): Implement post-run timeout setting.
+            post_status, post_code = self.runAnsiblePlaybook(
+                playbook, args['timeout'], success)
+            if post_status != self.RESULT_NORMAL or post_code != 0:
+                result = 'POST_FAILURE'
+        return result
+
+    def getHostList(self, args):
+        # TODO(clarkb): This prefers v4 because we're not sure if we
+        # expect v6 to work.  If we can determine how to prefer v6
+        hosts = []
+        for node in args['nodes']:
+            ip = node.get('public_ipv4')
+            if not ip:
+                ip = node.get('public_ipv6')
+            hosts.append((node['name'], dict(ansible_host=ip)))
+        return hosts
+
+    def _blockPluginDirs(self, path):
+        '''Prevent execution of playbooks or roles with plugins
+
+        Plugins are loaded from roles and also if there is a plugin
+        dir adjacent to the playbook.  Throw an error if the path
+        contains a location that would cause a plugin to get loaded.
+
+        '''
+        for entry in os.listdir(path):
+            if os.path.isdir(entry) and entry.endswith('_plugins'):
+                raise Exception(
+                    "Ansible plugin dir %s found adjacent to playbook %s in"
+                    " non-trusted repo." % (entry, path))
+
+    def findPlaybook(self, path, required=False, trusted=False):
+        for ext in ['.yaml', '.yml']:
+            fn = path + ext
+            if os.path.exists(fn):
+                if not trusted:
+                    playbook_dir = os.path.dirname(os.path.abspath(fn))
+                    self._blockPluginDirs(playbook_dir)
+                return fn
+        if required:
+            raise Exception("Unable to find playbook %s" % path)
+        return None
+
+    def preparePlaybookRepos(self, args):
+        for playbook in args['pre_playbooks']:
+            jobdir_playbook = self.jobdir.addPrePlaybook()
+            self.preparePlaybookRepo(jobdir_playbook, playbook,
+                                     args, required=True)
+
+        for playbook in args['playbooks']:
+            jobdir_playbook = self.jobdir.addPlaybook()
+            self.preparePlaybookRepo(jobdir_playbook, playbook,
+                                     args, required=False)
+            if jobdir_playbook.path is not None:
+                self.jobdir.playbook = jobdir_playbook
+                break
+        if self.jobdir.playbook is None:
+            raise Exception("No valid playbook found")
+
+        for playbook in args['post_playbooks']:
+            jobdir_playbook = self.jobdir.addPostPlaybook()
+            self.preparePlaybookRepo(jobdir_playbook, playbook,
+                                     args, required=True)
+
+    def preparePlaybookRepo(self, jobdir_playbook, playbook, args, required):
+        self.log.debug("Prepare playbook repo for %s" % (playbook,))
+        # Check out the playbook repo if needed and set the path to
+        # the playbook that should be run.
+        jobdir_playbook.trusted = playbook['trusted']
+        source = self.launcher_server.connections.getSource(
+            playbook['connection'])
+        project = source.getProject(playbook['project'])
+        # TODO(jeblair): construct the url in the merger itself
+        url = source.getGitUrl(project)
+        if not playbook['trusted']:
+            # This is a project repo, so it is safe to use the already
+            # checked out version (from speculative merging) of the
+            # playbook
+            for i in args['items']:
+                if (i['connection_name'] == playbook['connection'] and
+                    i['project'] == playbook['project']):
+                    # We already have this repo prepared
+                    path = os.path.join(self.jobdir.src_root,
+                                        project.name,
+                                        playbook['path'])
+                    jobdir_playbook.path = self.findPlaybook(
+                        path,
+                        required=required,
+                        trusted=playbook['trusted'])
+                    return
+        # The playbook repo is either a config repo, or it isn't in
+        # the stack of changes we are testing, so check out the branch
+        # tip into a dedicated space.
+
+        merger = self.launcher_server._getMerger(jobdir_playbook.root)
+        merger.checkoutBranch(project.name, url, playbook['branch'])
+
+        path = os.path.join(jobdir_playbook.root,
+                            project.name,
+                            playbook['path'])
+        jobdir_playbook.path = self.findPlaybook(
+            path,
+            required=required,
+            trusted=playbook['trusted'])
+
+    def prepareRoles(self, args):
+        for role in args['roles']:
+            if role['type'] == 'zuul':
+                root = self.jobdir.addRole()
+                self.prepareZuulRole(args, role, root)
+
+    def findRole(self, path, trusted=False):
+        d = os.path.join(path, 'tasks')
+        if os.path.isdir(d):
+            # This is a bare role
+            if not trusted:
+                self._blockPluginDirs(path)
+            # None signifies that the repo is a bare role
+            return None
+        d = os.path.join(path, 'roles')
+        if os.path.isdir(d):
+            # This repo has a collection of roles
+            if not trusted:
+                for entry in os.listdir(d):
+                    self._blockPluginDirs(os.path.join(d, entry))
+            return d
+        # We assume the repository itself is a collection of roles
+        if not trusted:
+            for entry in os.listdir(path):
+                self._blockPluginDirs(os.path.join(path, entry))
+        return path
+
+    def prepareZuulRole(self, args, role, root):
+        self.log.debug("Prepare zuul role for %s" % (role,))
+        # Check out the role repo if needed
+        source = self.launcher_server.connections.getSource(
+            role['connection'])
+        project = source.getProject(role['project'])
+        # TODO(jeblair): construct the url in the merger itself
+        url = source.getGitUrl(project)
+        role_repo = None
+        if not role['trusted']:
+            # This is a project repo, so it is safe to use the already
+            # checked out version (from speculative merging) of the
+            # role
+
+            for i in args['items']:
+                if (i['connection_name'] == role['connection'] and
+                    i['project'] == role['project']):
+                    # We already have this repo prepared;
+                    # copy it into location.
+
+                    path = os.path.join(self.jobdir.src_root,
+                                        project.name)
+                    link = os.path.join(root, role['name'])
+                    os.symlink(path, link)
+                    role_repo = link
+                    break
+
+        # The role repo is either a config repo, or it isn't in
+        # the stack of changes we are testing, so check out the branch
+        # tip into a dedicated space.
+
+        if not role_repo:
+            merger = self.launcher_server._getMerger(root)
+            merger.checkoutBranch(project.name, url, 'master')
+            role_repo = os.path.join(root, project.name)
+
+        role_path = self.findRole(role_repo, trusted=role['trusted'])
+        if role_path is None:
+            # In the case of a bare role, add the containing directory
+            role_path = root
+        self.jobdir.roles_path.append(role_path)
+
+    def prepareAnsibleFiles(self, args):
+        with open(self.jobdir.inventory, 'w') as inventory:
+            for host_name, host_vars in self.getHostList(args):
+                inventory.write(host_name)
+                inventory.write(' ')
+                for k, v in host_vars.items():
+                    inventory.write('%s=%s' % (k, v))
+                inventory.write('\n')
+                if 'ansible_host' in host_vars:
+                    os.system("ssh-keyscan %s >> %s" % (
+                        host_vars['ansible_host'],
+                        self.jobdir.known_hosts))
+
+        with open(self.jobdir.vars, 'w') as vars_yaml:
+            zuul_vars = dict(args['vars'])
+            zuul_vars['zuul']['launcher'] = dict(src_root=self.jobdir.src_root,
+                                                 log_root=self.jobdir.log_root)
+            vars_yaml.write(
+                yaml.safe_dump(zuul_vars, default_flow_style=False))
+        self.writeAnsibleConfig(self.jobdir.untrusted_config)
+        self.writeAnsibleConfig(self.jobdir.trusted_config, trusted=True)
+
+    def writeAnsibleConfig(self, config_path, trusted=False):
+        with open(config_path, 'w') as config:
+            config.write('[defaults]\n')
+            config.write('hostfile = %s\n' % self.jobdir.inventory)
+            config.write('local_tmp = %s/.ansible/local_tmp\n' %
+                         self.jobdir.root)
+            config.write('remote_tmp = %s/.ansible/remote_tmp\n' %
+                         self.jobdir.root)
+            config.write('private_key_file = %s\n' % self.private_key_file)
+            config.write('retry_files_enabled = False\n')
+            config.write('log_path = %s\n' % self.jobdir.ansible_log)
+            config.write('gathering = explicit\n')
+            config.write('library = %s\n'
+                         % self.launcher_server.library_dir)
+            if self.jobdir.roles_path:
+                config.write('roles_path = %s\n' %
+                             ':'.join(self.jobdir.roles_path))
+            config.write('callback_plugins = %s\n'
+                         % self.launcher_server.callback_dir)
+            config.write('stdout_callback = zuul_stream\n')
+            # bump the timeout because busy nodes may take more than
+            # 10s to respond
+            config.write('timeout = 30\n')
+            if not trusted:
+                config.write('action_plugins = %s\n'
+                             % self.launcher_server.action_dir)
+
+            # On trusted jobs, we want to prevent the printing of args,
+            # since trusted jobs might have access to secrets that they may
+            # need to pass to a task or a role. On the other hand, there
+            # should be no sensitive data in untrusted jobs, and printing
+            # the args could be useful for debugging.
+            config.write('display_args_to_stdout = %s\n' %
+                         str(not trusted))
+
+            config.write('[ssh_connection]\n')
+            # NB: when setting pipelining = True, keep_remote_files
+            # must be False (the default).  Otherwise it apparently
+            # will override the pipelining option and effectively
+            # disable it.  Pipelining has a side effect of running the
+            # command without a tty (ie, without the -tt argument to
+            # ssh).  We require this behavior so that if a job runs a
+            # command which expects interactive input on a tty (such
+            # as sudo) it does not hang.
+            config.write('pipelining = True\n')
+            ssh_args = "-o ControlMaster=auto -o ControlPersist=60s " \
+                "-o UserKnownHostsFile=%s" % self.jobdir.known_hosts
+            config.write('ssh_args = %s\n' % ssh_args)
+
+    def _ansibleTimeout(self, msg):
+        self.log.warning(msg)
+        self.abortRunningProc()
+
+    def abortRunningProc(self):
+        with self.proc_lock:
+            if not self.proc:
+                self.log.debug("Abort: no process is running")
+                return
+            self.log.debug("Abort: sending kill signal to job "
+                           "process group")
+            try:
+                pgid = os.getpgid(self.proc.pid)
+                os.killpg(pgid, signal.SIGKILL)
+            except Exception:
+                self.log.exception("Exception while killing ansible process:")
+
+    def runAnsible(self, cmd, timeout, trusted=False):
+        env_copy = os.environ.copy()
+        env_copy['LOGNAME'] = 'zuul'
+
+        if trusted:
+            env_copy['ANSIBLE_CONFIG'] = self.jobdir.trusted_config
+        else:
+            env_copy['ANSIBLE_CONFIG'] = self.jobdir.untrusted_config
+
+        with self.proc_lock:
+            if self.aborted:
+                return (self.RESULT_ABORTED, None)
+            self.log.debug("Ansible command: %s" % (cmd,))
+            self.proc = subprocess.Popen(
+                cmd,
+                cwd=self.jobdir.work_root,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.STDOUT,
+                preexec_fn=os.setsid,
+                env=env_copy,
+            )
+
+        ret = None
+        if timeout:
+            watchdog = Watchdog(timeout, self._ansibleTimeout,
+                                ("Ansible timeout exceeded",))
+            watchdog.start()
+        try:
+            for line in iter(self.proc.stdout.readline, b''):
+                line = line[:1024].rstrip()
+                self.log.debug("Ansible output: %s" % (line,))
+            ret = self.proc.wait()
+        finally:
+            if timeout:
+                watchdog.stop()
+        self.log.debug("Ansible exit code: %s" % (ret,))
+
+        with self.proc_lock:
+            self.proc = None
+
+        if timeout and watchdog.timed_out:
+            return (self.RESULT_TIMED_OUT, None)
+        if ret == 3:
+            # AnsibleHostUnreachable: We had a network issue connecting to
+            # our zuul-worker.
+            return (self.RESULT_UNREACHABLE, None)
+        elif ret == -9:
+            # Received abort request.
+            return (self.RESULT_ABORTED, None)
+
+        return (self.RESULT_NORMAL, ret)
+
+    def runAnsiblePlaybook(self, playbook, timeout, success=None):
+        env_copy = os.environ.copy()
+        env_copy['LOGNAME'] = 'zuul'
+
+        if False:  # TODOv3: self.options['verbose']:
+            verbose = '-vvv'
+        else:
+            verbose = '-v'
+
+        cmd = ['ansible-playbook', playbook.path]
+
+        if success is not None:
+            cmd.extend(['-e', 'success=%s' % str(bool(success))])
+
+        cmd.extend(['-e@%s' % self.jobdir.vars, verbose])
+
+        return self.runAnsible(
+            cmd=cmd, timeout=timeout, trusted=playbook.trusted)
diff --git a/zuul/layoutvalidator.py b/zuul/layoutvalidator.py
index 0292d2a..0f1a46e 100644
--- a/zuul/layoutvalidator.py
+++ b/zuul/layoutvalidator.py
@@ -25,10 +25,31 @@
     return v.Any([x], x)
 
 
-class LayoutSchema(object):
-    include = {'python-file': str}
-    includes = [include]
+class ConfigSchema(object):
+    tenant_source = v.Schema({'repos': [str]})
 
+    def validateTenantSources(self, value, path=[]):
+        if isinstance(value, dict):
+            for k, val in value.items():
+                self.validateTenantSource(val, path + [k])
+        else:
+            raise v.Invalid("Invalid tenant source", path)
+
+    def validateTenantSource(self, value, path=[]):
+        # TODOv3(jeblair): validate against connections
+        self.tenant_source(value)
+
+    def getSchema(self, data, connections=None):
+        tenant = {v.Required('name'): str,
+                  'include': toList(str),
+                  'source': self.validateTenantSources}
+
+        schema = v.Schema({'tenants': [tenant]})
+
+        return schema
+
+
+class LayoutSchema(object):
     manager = v.Any('IndependentPipelineManager',
                     'DependentPipelineManager')
 
@@ -106,7 +127,6 @@
            'attempts': int,
            'mutex': str,
            'tags': toList(str),
-           'parameter-function': str,
            'branch': toList(str),
            'files': toList(str),
            'swift': toList(swift),
@@ -344,3 +364,9 @@
                 if action in pipeline:
                     self.extraDriverValidation('reporter', pipeline[action],
                                                connections)
+
+
+class ConfigValidator(object):
+    def validate(self, data, connections=None):
+        schema = ConfigSchema().getSchema(data, connections)
+        schema(data)
diff --git a/zuul/lib/connections.py b/zuul/lib/connections.py
index 7d47775..27d8a1b 100644
--- a/zuul/lib/connections.py
+++ b/zuul/lib/connections.py
@@ -15,66 +15,124 @@
 import logging
 import re
 
-import zuul.connection.gerrit
-import zuul.connection.smtp
-import zuul.connection.sql
+import zuul.driver.zuul
+import zuul.driver.gerrit
+import zuul.driver.git
+import zuul.driver.smtp
+import zuul.driver.timer
+from zuul.connection import BaseConnection
 
 
-def configure_connections(config):
-    log = logging.getLogger("configure_connections")
-    # Register connections from the config
+class DefaultConnection(BaseConnection):
+    pass
 
-    # TODO(jhesketh): import connection modules dynamically
-    connections = {}
 
-    for section_name in config.sections():
-        con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
-                             section_name, re.I)
-        if not con_match:
-            continue
-        con_name = con_match.group(2)
-        con_config = dict(config.items(section_name))
+class ConnectionRegistry(object):
+    """A registry of connections"""
 
-        if 'driver' not in con_config:
-            raise Exception("No driver specified for connection %s."
-                            % con_name)
+    log = logging.getLogger("zuul.ConnectionRegistry")
 
-        con_driver = con_config['driver']
+    def __init__(self):
+        self.connections = {}
+        self.drivers = {}
 
-        # TODO(jhesketh): load the required class automatically
-        if con_driver == 'gerrit':
-            connections[con_name] = \
-                zuul.connection.gerrit.GerritConnection(con_name,
-                                                        con_config)
-        elif con_driver == 'smtp':
-            connections[con_name] = \
-                zuul.connection.smtp.SMTPConnection(con_name, con_config)
-        elif con_driver == 'sql':
-            connections[con_name] = \
-                zuul.connection.sql.SQLConnection(con_name, con_config)
-        else:
-            raise Exception("Unknown driver, %s, for connection %s"
-                            % (con_config['driver'], con_name))
+        self.registerDriver(zuul.driver.zuul.ZuulDriver())
+        self.registerDriver(zuul.driver.gerrit.GerritDriver())
+        self.registerDriver(zuul.driver.git.GitDriver())
+        self.registerDriver(zuul.driver.smtp.SMTPDriver())
+        self.registerDriver(zuul.driver.timer.TimerDriver())
 
-    # If the [gerrit] or [smtp] sections still exist, load them in as a
-    # connection named 'gerrit' or 'smtp' respectfully
+    def registerDriver(self, driver):
+        if driver.name in self.drivers:
+            raise Exception("Driver %s already registered" % driver.name)
+        self.drivers[driver.name] = driver
 
-    if 'gerrit' in config.sections():
-        if 'gerrit' in connections:
-            log.warning("The legacy [gerrit] section will be ignored in favour"
-                        " of the [connection gerrit].")
-        else:
-            connections['gerrit'] = \
-                zuul.connection.gerrit.GerritConnection(
-                    'gerrit', dict(config.items('gerrit')))
+    def registerScheduler(self, sched, load=True):
+        for driver_name, driver in self.drivers.items():
+            if hasattr(driver, 'registerScheduler'):
+                driver.registerScheduler(sched)
+        for connection_name, connection in self.connections.items():
+            connection.registerScheduler(sched)
+            if load:
+                connection.onLoad()
 
-    if 'smtp' in config.sections():
-        if 'smtp' in connections:
-            log.warning("The legacy [smtp] section will be ignored in favour"
-                        " of the [connection smtp].")
-        else:
-            connections['smtp'] = \
-                zuul.connection.smtp.SMTPConnection(
-                    'smtp', dict(config.items('smtp')))
+    def reconfigureDrivers(self, tenant):
+        for driver in self.drivers.values():
+            if hasattr(driver, 'reconfigure'):
+                driver.reconfigure(tenant)
 
-    return connections
+    def stop(self):
+        for connection_name, connection in self.connections.items():
+            connection.onStop()
+
+    def configure(self, config):
+        # Register connections from the config
+        # TODO(jhesketh): import connection modules dynamically
+        connections = {}
+
+        for section_name in config.sections():
+            con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
+                                 section_name, re.I)
+            if not con_match:
+                continue
+            con_name = con_match.group(2)
+            con_config = dict(config.items(section_name))
+
+            if 'driver' not in con_config:
+                raise Exception("No driver specified for connection %s."
+                                % con_name)
+
+            con_driver = con_config['driver']
+            if con_driver not in self.drivers:
+                raise Exception("Unknown driver, %s, for connection %s"
+                                % (con_config['driver'], con_name))
+
+            driver = self.drivers[con_driver]
+            connection = driver.getConnection(con_name, con_config)
+            connections[con_name] = connection
+
+        # If the [gerrit] or [smtp] sections still exist, load them in as a
+        # connection named 'gerrit' or 'smtp' respectfully
+
+        if 'gerrit' in config.sections():
+            if 'gerrit' in connections:
+                self.log.warning(
+                    "The legacy [gerrit] section will be ignored in favour"
+                    " of the [connection gerrit].")
+            else:
+                driver = self.drivers['gerrit']
+                connections['gerrit'] = \
+                    driver.getConnection(
+                        'gerrit', dict(config.items('gerrit')))
+
+        if 'smtp' in config.sections():
+            if 'smtp' in connections:
+                self.log.warning(
+                    "The legacy [smtp] section will be ignored in favour"
+                    " of the [connection smtp].")
+            else:
+                driver = self.drivers['smtp']
+                connections['smtp'] = \
+                    driver.getConnection(
+                        'smtp', dict(config.items('smtp')))
+
+        # Create default connections for drivers which need no
+        # connection information (e.g., 'timer' or 'zuul').
+        for driver in self.drivers.values():
+            if not hasattr(driver, 'getConnection'):
+                connections[driver.name] = DefaultConnection(
+                    driver, driver.name, {})
+
+        self.connections = connections
+
+    def getSource(self, connection_name):
+        connection = self.connections[connection_name]
+        return connection.driver.getSource(connection)
+
+    def getReporter(self, connection_name, config=None):
+        connection = self.connections[connection_name]
+        return connection.driver.getReporter(connection, config)
+
+    def getTrigger(self, connection_name, config=None):
+        connection = self.connections[connection_name]
+        return connection.driver.getTrigger(connection, config)
diff --git a/zuul/manager/__init__.py b/zuul/manager/__init__.py
new file mode 100644
index 0000000..4447615
--- /dev/null
+++ b/zuul/manager/__init__.py
@@ -0,0 +1,786 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from zuul import exceptions
+from zuul.model import NullChange
+
+
+class DynamicChangeQueueContextManager(object):
+    def __init__(self, change_queue):
+        self.change_queue = change_queue
+
+    def __enter__(self):
+        return self.change_queue
+
+    def __exit__(self, etype, value, tb):
+        if self.change_queue and not self.change_queue.queue:
+            self.change_queue.pipeline.removeQueue(self.change_queue)
+
+
+class StaticChangeQueueContextManager(object):
+    def __init__(self, change_queue):
+        self.change_queue = change_queue
+
+    def __enter__(self):
+        return self.change_queue
+
+    def __exit__(self, etype, value, tb):
+        pass
+
+
+class PipelineManager(object):
+    """Abstract Base Class for enqueing and processing Changes in a Pipeline"""
+
+    log = logging.getLogger("zuul.PipelineManager")
+
+    def __init__(self, sched, pipeline):
+        self.sched = sched
+        self.pipeline = pipeline
+        self.event_filters = []
+        self.changeish_filters = []
+
+    def __str__(self):
+        return "<%s %s>" % (self.__class__.__name__, self.pipeline.name)
+
+    def _postConfig(self, layout):
+        self.log.info("Configured Pipeline Manager %s" % self.pipeline.name)
+        self.log.info("  Source: %s" % self.pipeline.source)
+        self.log.info("  Requirements:")
+        for f in self.changeish_filters:
+            self.log.info("    %s" % f)
+        self.log.info("  Events:")
+        for e in self.event_filters:
+            self.log.info("    %s" % e)
+        self.log.info("  Projects:")
+
+        def log_jobs(tree, indent=0):
+            istr = '    ' + ' ' * indent
+            if tree.job:
+                # TODOv3(jeblair): represent matchers
+                efilters = ''
+                # for b in tree.job._branches:
+                #     efilters += str(b)
+                # for f in tree.job._files:
+                #     efilters += str(f)
+                # if tree.job.skip_if_matcher:
+                #     efilters += str(tree.job.skip_if_matcher)
+                # if efilters:
+                #     efilters = ' ' + efilters
+                tags = []
+                if tree.job.hold_following_changes:
+                    tags.append('[hold]')
+                if not tree.job.voting:
+                    tags.append('[nonvoting]')
+                if tree.job.mutex:
+                    tags.append('[mutex: %s]' % tree.job.mutex)
+                tags = ' '.join(tags)
+                self.log.info("%s%s%s %s" % (istr, repr(tree.job),
+                                             efilters, tags))
+            for x in tree.job_trees:
+                log_jobs(x, indent + 2)
+
+        for project_name in layout.project_configs.keys():
+            project = self.pipeline.source.getProject(project_name)
+            tree = self.pipeline.getJobTree(project)
+            if tree:
+                self.log.info("    %s" % project)
+                log_jobs(tree)
+        self.log.info("  On start:")
+        self.log.info("    %s" % self.pipeline.start_actions)
+        self.log.info("  On success:")
+        self.log.info("    %s" % self.pipeline.success_actions)
+        self.log.info("  On failure:")
+        self.log.info("    %s" % self.pipeline.failure_actions)
+        self.log.info("  On merge-failure:")
+        self.log.info("    %s" % self.pipeline.merge_failure_actions)
+        self.log.info("  When disabled:")
+        self.log.info("    %s" % self.pipeline.disabled_actions)
+
+    def getSubmitAllowNeeds(self):
+        # Get a list of code review labels that are allowed to be
+        # "needed" in the submit records for a change, with respect
+        # to this queue.  In other words, the list of review labels
+        # this queue itself is likely to set before submitting.
+        allow_needs = set()
+        for action_reporter in self.pipeline.success_actions:
+            allow_needs.update(action_reporter.getSubmitAllowNeeds())
+        return allow_needs
+
+    def eventMatches(self, event, change):
+        if event.forced_pipeline:
+            if event.forced_pipeline == self.pipeline.name:
+                self.log.debug("Event %s for change %s was directly assigned "
+                               "to pipeline %s" % (event, change, self))
+                return True
+            else:
+                return False
+        for ef in self.event_filters:
+            if ef.matches(event, change):
+                self.log.debug("Event %s for change %s matched %s "
+                               "in pipeline %s" % (event, change, ef, self))
+                return True
+        return False
+
+    def isChangeAlreadyInPipeline(self, change):
+        # Checks live items in the pipeline
+        for item in self.pipeline.getAllItems():
+            if item.live and change.equals(item.change):
+                return True
+        return False
+
+    def isChangeAlreadyInQueue(self, change, change_queue):
+        # Checks any item in the specified change queue
+        for item in change_queue.queue:
+            if change.equals(item.change):
+                return True
+        return False
+
+    def reportStart(self, item):
+        if not self.pipeline._disabled:
+            try:
+                self.log.info("Reporting start, action %s item %s" %
+                              (self.pipeline.start_actions, item))
+                ret = self.sendReport(self.pipeline.start_actions,
+                                      self.pipeline.source, item)
+                if ret:
+                    self.log.error("Reporting item start %s received: %s" %
+                                   (item, ret))
+            except:
+                self.log.exception("Exception while reporting start:")
+
+    def sendReport(self, action_reporters, source, item,
+                   message=None):
+        """Sends the built message off to configured reporters.
+
+        Takes the action_reporters, item, message and extra options and
+        sends them to the pluggable reporters.
+        """
+        report_errors = []
+        if len(action_reporters) > 0:
+            for reporter in action_reporters:
+                ret = reporter.report(source, self.pipeline, item)
+                if ret:
+                    report_errors.append(ret)
+            if len(report_errors) == 0:
+                return
+        return report_errors
+
+    def isChangeReadyToBeEnqueued(self, change):
+        return True
+
+    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
+                            change_queue):
+        return True
+
+    def enqueueChangesBehind(self, change, quiet, ignore_requirements,
+                             change_queue):
+        return True
+
+    def checkForChangesNeededBy(self, change, change_queue):
+        return True
+
+    def getFailingDependentItems(self, item):
+        return None
+
+    def getDependentItems(self, item):
+        orig_item = item
+        items = []
+        while item.item_ahead:
+            items.append(item.item_ahead)
+            item = item.item_ahead
+        self.log.info("Change %s depends on changes %s" %
+                      (orig_item.change,
+                       [x.change for x in items]))
+        return items
+
+    def getItemForChange(self, change):
+        for item in self.pipeline.getAllItems():
+            if item.change.equals(change):
+                return item
+        return None
+
+    def findOldVersionOfChangeAlreadyInQueue(self, change):
+        for item in self.pipeline.getAllItems():
+            if not item.live:
+                continue
+            if change.isUpdateOf(item.change):
+                return item
+        return None
+
+    def removeOldVersionsOfChange(self, change):
+        if not self.pipeline.dequeue_on_new_patchset:
+            return
+        old_item = self.findOldVersionOfChangeAlreadyInQueue(change)
+        if old_item:
+            self.log.debug("Change %s is a new version of %s, removing %s" %
+                           (change, old_item.change, old_item))
+            self.removeItem(old_item)
+
+    def removeAbandonedChange(self, change):
+        self.log.debug("Change %s abandoned, removing." % change)
+        for item in self.pipeline.getAllItems():
+            if not item.live:
+                continue
+            if item.change.equals(change):
+                self.removeItem(item)
+
+    def reEnqueueItem(self, item, last_head):
+        with self.getChangeQueue(item.change, last_head.queue) as change_queue:
+            if change_queue:
+                self.log.debug("Re-enqueing change %s in queue %s" %
+                               (item.change, change_queue))
+                change_queue.enqueueItem(item)
+
+                # Get an updated copy of the layout if necessary.
+                # This will return one of the following:
+                # 1) An existing layout from the item ahead or pipeline.
+                # 2) A newly created layout from the cached pipeline
+                #    layout config plus the previously returned
+                #    in-repo files stored in the buildset.
+                # 3) None in the case that a fetch of the files from
+                #    the merger is still pending.
+                item.current_build_set.layout = self.getLayout(item)
+
+                # Rebuild the frozen job tree from the new layout, if
+                # we have one.  If not, it will be built later.
+                if item.current_build_set.layout:
+                    item.freezeJobTree()
+
+                # Re-set build results in case any new jobs have been
+                # added to the tree.
+                for build in item.current_build_set.getBuilds():
+                    if build.result:
+                        item.setResult(build)
+                # Similarly, reset the item state.
+                if item.current_build_set.unable_to_merge:
+                    item.setUnableToMerge()
+                if item.current_build_set.config_error:
+                    item.setConfigError(item.current_build_set.config_error)
+                if item.dequeued_needing_change:
+                    item.setDequeuedNeedingChange()
+
+                self.reportStats(item)
+                return True
+            else:
+                self.log.error("Unable to find change queue for project %s" %
+                               item.change.project)
+                return False
+
+    def addChange(self, change, quiet=False, enqueue_time=None,
+                  ignore_requirements=False, live=True,
+                  change_queue=None):
+        self.log.debug("Considering adding change %s" % change)
+
+        # If we are adding a live change, check if it's a live item
+        # anywhere in the pipeline.  Otherwise, we will perform the
+        # duplicate check below on the specific change_queue.
+        if live and self.isChangeAlreadyInPipeline(change):
+            self.log.debug("Change %s is already in pipeline, "
+                           "ignoring" % change)
+            return True
+
+        if not self.isChangeReadyToBeEnqueued(change):
+            self.log.debug("Change %s is not ready to be enqueued, ignoring" %
+                           change)
+            return False
+
+        if not ignore_requirements:
+            for f in self.changeish_filters:
+                if not f.matches(change):
+                    self.log.debug("Change %s does not match pipeline "
+                                   "requirement %s" % (change, f))
+                    return False
+
+        with self.getChangeQueue(change, change_queue) as change_queue:
+            if not change_queue:
+                self.log.debug("Unable to find change queue for "
+                               "change %s in project %s" %
+                               (change, change.project))
+                return False
+
+            if not self.enqueueChangesAhead(change, quiet, ignore_requirements,
+                                            change_queue):
+                self.log.debug("Failed to enqueue changes "
+                               "ahead of %s" % change)
+                return False
+
+            if self.isChangeAlreadyInQueue(change, change_queue):
+                self.log.debug("Change %s is already in queue, "
+                               "ignoring" % change)
+                return True
+
+            self.log.info("Adding change %s to queue %s in %s" %
+                          (change, change_queue, self.pipeline))
+            item = change_queue.enqueueChange(change)
+            if enqueue_time:
+                item.enqueue_time = enqueue_time
+            item.live = live
+            self.reportStats(item)
+            if not quiet:
+                if len(self.pipeline.start_actions) > 0:
+                    self.reportStart(item)
+            self.enqueueChangesBehind(change, quiet, ignore_requirements,
+                                      change_queue)
+            zuul_driver = self.sched.connections.drivers['zuul']
+            tenant = self.pipeline.layout.tenant
+            zuul_driver.onChangeEnqueued(tenant, item.change, self.pipeline)
+            return True
+
+    def dequeueItem(self, item):
+        self.log.debug("Removing change %s from queue" % item.change)
+        item.queue.dequeueItem(item)
+
+    def removeItem(self, item):
+        # Remove an item from the queue, probably because it has been
+        # superseded by another change.
+        self.log.debug("Canceling builds behind change: %s "
+                       "because it is being removed." % item.change)
+        self.cancelJobs(item)
+        self.dequeueItem(item)
+        self.reportStats(item)
+
+    def provisionNodes(self, item):
+        jobs = item.findJobsToRequest()
+        if not jobs:
+            return False
+        build_set = item.current_build_set
+        self.log.debug("Requesting nodes for change %s" % item.change)
+        for job in jobs:
+            req = self.sched.nodepool.requestNodes(build_set, job)
+            self.log.debug("Adding node request %s for job %s to item %s" %
+                           (req, job, item))
+            build_set.setJobNodeRequest(job.name, req)
+        return True
+
+    def _launchJobs(self, item, jobs):
+        self.log.debug("Launching jobs for change %s" % item.change)
+        dependent_items = self.getDependentItems(item)
+        for job in jobs:
+            self.log.debug("Found job %s for change %s" % (job, item.change))
+            try:
+                nodeset = item.current_build_set.getJobNodeSet(job.name)
+                self.sched.nodepool.useNodeSet(nodeset)
+                build = self.sched.launcher.launch(job, item,
+                                                   self.pipeline,
+                                                   dependent_items)
+                self.log.debug("Adding build %s of job %s to item %s" %
+                               (build, job, item))
+                item.addBuild(build)
+            except:
+                self.log.exception("Exception while launching job %s "
+                                   "for change %s:" % (job, item.change))
+
+    def launchJobs(self, item):
+        # TODO(jeblair): This should return a value indicating a job
+        # was launched.  Appears to be a longstanding bug.
+        if not item.current_build_set.layout:
+            return False
+
+        jobs = item.findJobsToRun(self.sched.mutex)
+        if jobs:
+            self._launchJobs(item, jobs)
+
+    def cancelJobs(self, item, prime=True):
+        self.log.debug("Cancel jobs for change %s" % item.change)
+        canceled = False
+        old_build_set = item.current_build_set
+        if prime and item.current_build_set.ref:
+            item.resetAllBuilds()
+        for req in old_build_set.node_requests.values():
+            self.sched.nodepool.cancelRequest(req)
+        old_build_set.node_requests = {}
+        canceled_jobs = set()
+        for build in old_build_set.getBuilds():
+            if build.result:
+                canceled_jobs.add(build.job.name)
+                continue
+            was_running = False
+            try:
+                was_running = self.sched.launcher.cancel(build)
+            except:
+                self.log.exception("Exception while canceling build %s "
+                                   "for change %s" % (build, item.change))
+            finally:
+                self.sched.mutex.release(build.build_set.item, build.job)
+
+            if not was_running:
+                try:
+                    nodeset = build.build_set.getJobNodeSet(build.job.name)
+                    self.sched.nodepool.returnNodeSet(nodeset)
+                except Exception:
+                    self.log.exception("Unable to return nodeset %s for "
+                                       "canceled build request %s" %
+                                       (nodeset, build))
+            build.result = 'CANCELED'
+            canceled = True
+            canceled_jobs.add(build.job.name)
+        for jobname, nodeset in old_build_set.nodesets.items()[:]:
+            if jobname in canceled_jobs:
+                continue
+            self.sched.nodepool.returnNodeSet(nodeset)
+        for item_behind in item.items_behind:
+            self.log.debug("Canceling jobs for change %s, behind change %s" %
+                           (item_behind.change, item.change))
+            if self.cancelJobs(item_behind, prime=prime):
+                canceled = True
+        return canceled
+
+    def _makeMergerItem(self, item):
+        # Create a dictionary with all info about the item needed by
+        # the merger.
+        number = None
+        patchset = None
+        oldrev = None
+        newrev = None
+        if hasattr(item.change, 'number'):
+            number = item.change.number
+            patchset = item.change.patchset
+        elif hasattr(item.change, 'newrev'):
+            oldrev = item.change.oldrev
+            newrev = item.change.newrev
+        connection_name = self.pipeline.source.connection.connection_name
+
+        project = item.change.project.name
+        return dict(project=project,
+                    url=self.pipeline.source.getGitUrl(
+                        item.change.project),
+                    connection_name=connection_name,
+                    merge_mode=item.current_build_set.getMergeMode(project),
+                    refspec=item.change.refspec,
+                    branch=item.change.branch,
+                    ref=item.current_build_set.ref,
+                    number=number,
+                    patchset=patchset,
+                    oldrev=oldrev,
+                    newrev=newrev,
+                    )
+
+    def getLayout(self, item):
+        if not item.change.updatesConfig():
+            if item.item_ahead:
+                return item.item_ahead.current_build_set.layout
+            else:
+                return item.queue.pipeline.layout
+        # This item updates the config, ask the merger for the result.
+        build_set = item.current_build_set
+        if build_set.merge_state == build_set.PENDING:
+            return None
+        if build_set.merge_state == build_set.COMPLETE:
+            if build_set.unable_to_merge:
+                return None
+            # Load layout
+            # Late import to break an import loop
+            import zuul.configloader
+            loader = zuul.configloader.ConfigLoader()
+            self.log.debug("Load dynamic layout with %s" % build_set.files)
+            try:
+                layout = loader.createDynamicLayout(
+                    item.pipeline.layout.tenant,
+                    build_set.files)
+            except zuul.configloader.ConfigurationSyntaxError as e:
+                self.log.info("Configuration syntax error "
+                              "in dynamic layout %s" %
+                              build_set.files)
+                item.setConfigError(str(e))
+                return None
+            except Exception:
+                self.log.exception("Error in dynamic layout %s" %
+                                   build_set.files)
+                item.setConfigError("Unknown configuration error")
+                return None
+            return layout
+        build_set.merge_state = build_set.PENDING
+        self.log.debug("Preparing dynamic layout for: %s" % item.change)
+        dependent_items = self.getDependentItems(item)
+        dependent_items.reverse()
+        all_items = dependent_items + [item]
+        merger_items = map(self._makeMergerItem, all_items)
+        self.sched.merger.mergeChanges(merger_items,
+                                       item.current_build_set,
+                                       ['.zuul.yaml'],
+                                       self.pipeline.precedence)
+
+    def prepareLayout(self, item):
+        # Get a copy of the layout in the context of the current
+        # queue.
+        # Returns True if the ref is ready, false otherwise
+        if not item.current_build_set.ref:
+            item.current_build_set.setConfiguration()
+        if not item.current_build_set.layout:
+            item.current_build_set.layout = self.getLayout(item)
+        if not item.current_build_set.layout:
+            return False
+        if not item.job_tree:
+            item.freezeJobTree()
+        return True
+
+    def _processOneItem(self, item, nnfi):
+        changed = False
+        item_ahead = item.item_ahead
+        if item_ahead and (not item_ahead.live):
+            item_ahead = None
+        change_queue = item.queue
+        failing_reasons = []  # Reasons this item is failing
+
+        if self.checkForChangesNeededBy(item.change, change_queue) is not True:
+            # It's not okay to enqueue this change, we should remove it.
+            self.log.info("Dequeuing change %s because "
+                          "it can no longer merge" % item.change)
+            self.cancelJobs(item)
+            self.dequeueItem(item)
+            item.setDequeuedNeedingChange()
+            if item.live:
+                try:
+                    self.reportItem(item)
+                except exceptions.MergeFailure:
+                    pass
+            return (True, nnfi)
+        dep_items = self.getFailingDependentItems(item)
+        actionable = change_queue.isActionable(item)
+        item.active = actionable
+        ready = False
+        if dep_items:
+            failing_reasons.append('a needed change is failing')
+            self.cancelJobs(item, prime=False)
+        else:
+            item_ahead_merged = False
+            if (item_ahead and item_ahead.change.is_merged):
+                item_ahead_merged = True
+            if (item_ahead != nnfi and not item_ahead_merged):
+                # Our current base is different than what we expected,
+                # and it's not because our current base merged.  Something
+                # ahead must have failed.
+                self.log.info("Resetting builds for change %s because the "
+                              "item ahead, %s, is not the nearest non-failing "
+                              "item, %s" % (item.change, item_ahead, nnfi))
+                change_queue.moveItem(item, nnfi)
+                changed = True
+                self.cancelJobs(item)
+            if actionable:
+                ready = self.prepareLayout(item)
+                if item.current_build_set.unable_to_merge:
+                    failing_reasons.append("it has a merge conflict")
+                if item.current_build_set.config_error:
+                    failing_reasons.append("it has an invalid configuration")
+                if ready and self.provisionNodes(item):
+                    changed = True
+        if actionable and ready and self.launchJobs(item):
+            changed = True
+        if item.didAnyJobFail():
+            failing_reasons.append("at least one job failed")
+        if (not item.live) and (not item.items_behind):
+            failing_reasons.append("is a non-live item with no items behind")
+            self.dequeueItem(item)
+            changed = True
+        if ((not item_ahead) and item.areAllJobsComplete() and item.live):
+            try:
+                self.reportItem(item)
+            except exceptions.MergeFailure:
+                failing_reasons.append("it did not merge")
+                for item_behind in item.items_behind:
+                    self.log.info("Resetting builds for change %s because the "
+                                  "item ahead, %s, failed to merge" %
+                                  (item_behind.change, item))
+                    self.cancelJobs(item_behind)
+            self.dequeueItem(item)
+            changed = True
+        elif not failing_reasons and item.live:
+            nnfi = item
+        item.current_build_set.failing_reasons = failing_reasons
+        if failing_reasons:
+            self.log.debug("%s is a failing item because %s" %
+                           (item, failing_reasons))
+        return (changed, nnfi)
+
+    def processQueue(self):
+        # Do whatever needs to be done for each change in the queue
+        self.log.debug("Starting queue processor: %s" % self.pipeline.name)
+        changed = False
+        for queue in self.pipeline.queues:
+            queue_changed = False
+            nnfi = None  # Nearest non-failing item
+            for item in queue.queue[:]:
+                item_changed, nnfi = self._processOneItem(
+                    item, nnfi)
+                if item_changed:
+                    queue_changed = True
+                self.reportStats(item)
+            if queue_changed:
+                changed = True
+                status = ''
+                for item in queue.queue:
+                    status += item.formatStatus()
+                if status:
+                    self.log.debug("Queue %s status is now:\n %s" %
+                                   (queue.name, status))
+        self.log.debug("Finished queue processor: %s (changed: %s)" %
+                       (self.pipeline.name, changed))
+        return changed
+
+    def onBuildStarted(self, build):
+        self.log.debug("Build %s started" % build)
+        return True
+
+    def onBuildCompleted(self, build):
+        self.log.debug("Build %s completed" % build)
+        item = build.build_set.item
+
+        item.setResult(build)
+        self.sched.mutex.release(item, build.job)
+        self.log.debug("Item %s status is now:\n %s" %
+                       (item, item.formatStatus()))
+
+        if build.retry:
+            build.build_set.removeJobNodeSet(build.job.name)
+
+        return True
+
+    def onMergeCompleted(self, event):
+        build_set = event.build_set
+        item = build_set.item
+        build_set.merge_state = build_set.COMPLETE
+        build_set.zuul_url = event.zuul_url
+        if event.merged:
+            build_set.commit = event.commit
+            build_set.files.setFiles(event.files)
+        elif event.updated:
+            if not isinstance(item.change, NullChange):
+                build_set.commit = item.change.newrev
+        if not build_set.commit and not isinstance(item.change, NullChange):
+            self.log.info("Unable to merge change %s" % item.change)
+            item.setUnableToMerge()
+
+    def onNodesProvisioned(self, event):
+        # TODOv3(jeblair): handle provisioning failure here
+        request = event.request
+        build_set = request.build_set
+        build_set.jobNodeRequestComplete(request.job.name, request,
+                                         request.nodeset)
+        if request.failed or not request.fulfilled:
+            self.log.info("Node request failure for %s" %
+                          (request.job.name,))
+            build_set.item.setNodeRequestFailure(request.job)
+        self.log.info("Completed node request %s for job %s of item %s "
+                      "with nodes %s" %
+                      (request, request.job, build_set.item,
+                       request.nodeset))
+
+    def reportItem(self, item):
+        if not item.reported:
+            # _reportItem() returns True if it failed to report.
+            item.reported = not self._reportItem(item)
+        if self.changes_merge:
+            succeeded = item.didAllJobsSucceed()
+            merged = item.reported
+            if merged:
+                merged = self.pipeline.source.isMerged(item.change,
+                                                       item.change.branch)
+            self.log.info("Reported change %s status: all-succeeded: %s, "
+                          "merged: %s" % (item.change, succeeded, merged))
+            change_queue = item.queue
+            if not (succeeded and merged):
+                self.log.debug("Reported change %s failed tests or failed "
+                               "to merge" % (item.change))
+                change_queue.decreaseWindowSize()
+                self.log.debug("%s window size decreased to %s" %
+                               (change_queue, change_queue.window))
+                raise exceptions.MergeFailure(
+                    "Change %s failed to merge" % item.change)
+            else:
+                change_queue.increaseWindowSize()
+                self.log.debug("%s window size increased to %s" %
+                               (change_queue, change_queue.window))
+
+                zuul_driver = self.sched.connections.drivers['zuul']
+                tenant = self.pipeline.layout.tenant
+                zuul_driver.onChangeMerged(tenant, item.change,
+                                           self.pipeline.source)
+
+    def _reportItem(self, item):
+        self.log.debug("Reporting change %s" % item.change)
+        ret = True  # Means error as returned by trigger.report
+        if item.getConfigError():
+            self.log.debug("Invalid config for change %s" % item.change)
+            # TODOv3(jeblair): consider a new reporter action for this
+            actions = self.pipeline.merge_failure_actions
+            item.setReportedResult('CONFIG_ERROR')
+        elif not item.getJobs():
+            # We don't send empty reports with +1,
+            # and the same for -1's (merge failures or transient errors)
+            # as they cannot be followed by +1's
+            self.log.debug("No jobs for change %s" % item.change)
+            actions = []
+        elif item.didAllJobsSucceed():
+            self.log.debug("success %s" % (self.pipeline.success_actions))
+            actions = self.pipeline.success_actions
+            item.setReportedResult('SUCCESS')
+            self.pipeline._consecutive_failures = 0
+        elif item.didMergerFail():
+            actions = self.pipeline.merge_failure_actions
+            item.setReportedResult('MERGER_FAILURE')
+        else:
+            actions = self.pipeline.failure_actions
+            item.setReportedResult('FAILURE')
+            self.pipeline._consecutive_failures += 1
+        if self.pipeline._disabled:
+            actions = self.pipeline.disabled_actions
+        # Check here if we should disable so that we only use the disabled
+        # reporters /after/ the last disable_at failure is still reported as
+        # normal.
+        if (self.pipeline.disable_at and not self.pipeline._disabled and
+            self.pipeline._consecutive_failures >= self.pipeline.disable_at):
+            self.pipeline._disabled = True
+        if actions:
+            try:
+                self.log.info("Reporting item %s, actions: %s" %
+                              (item, actions))
+                ret = self.sendReport(actions, self.pipeline.source, item)
+                if ret:
+                    self.log.error("Reporting item %s received: %s" %
+                                   (item, ret))
+            except:
+                self.log.exception("Exception while reporting:")
+                item.setReportedResult('ERROR')
+        return ret
+
+    def reportStats(self, item):
+        if not self.sched.statsd:
+            return
+        try:
+            # Update the gauge on enqueue and dequeue, but timers only
+            # when dequeing.
+            if item.dequeue_time:
+                dt = int((item.dequeue_time - item.enqueue_time) * 1000)
+            else:
+                dt = None
+            items = len(self.pipeline.getAllItems())
+
+            # stats.timers.zuul.pipeline.NAME.resident_time
+            # stats_counts.zuul.pipeline.NAME.total_changes
+            # stats.gauges.zuul.pipeline.NAME.current_changes
+            key = 'zuul.pipeline.%s' % self.pipeline.name
+            self.sched.statsd.gauge(key + '.current_changes', items)
+            if dt:
+                self.sched.statsd.timing(key + '.resident_time', dt)
+                self.sched.statsd.incr(key + '.total_changes')
+
+            # stats.timers.zuul.pipeline.NAME.ORG.PROJECT.resident_time
+            # stats_counts.zuul.pipeline.NAME.ORG.PROJECT.total_changes
+            project_name = item.change.project.name.replace('/', '.')
+            key += '.%s' % project_name
+            if dt:
+                self.sched.statsd.timing(key + '.resident_time', dt)
+                self.sched.statsd.incr(key + '.total_changes')
+        except:
+            self.log.exception("Exception reporting pipeline stats")
diff --git a/zuul/manager/dependent.py b/zuul/manager/dependent.py
new file mode 100644
index 0000000..3d006c2
--- /dev/null
+++ b/zuul/manager/dependent.py
@@ -0,0 +1,192 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from zuul import model
+from zuul.manager import PipelineManager, StaticChangeQueueContextManager
+
+
+class DependentPipelineManager(PipelineManager):
+    """PipelineManager for handling interrelated Changes.
+
+    The DependentPipelineManager puts Changes that share a Pipeline
+    into a shared :py:class:`~zuul.model.ChangeQueue`. It them processes them
+    using the Optmistic Branch Prediction logic with Nearest Non-Failing Item
+    reparenting algorithm for handling errors.
+    """
+    log = logging.getLogger("zuul.DependentPipelineManager")
+    changes_merge = True
+
+    def __init__(self, *args, **kwargs):
+        super(DependentPipelineManager, self).__init__(*args, **kwargs)
+
+    def _postConfig(self, layout):
+        super(DependentPipelineManager, self)._postConfig(layout)
+        self.buildChangeQueues()
+
+    def buildChangeQueues(self):
+        self.log.debug("Building shared change queues")
+        change_queues = {}
+        project_configs = self.pipeline.layout.project_configs
+
+        for project in self.pipeline.getProjects():
+            project_config = project_configs[project.name]
+            project_pipeline_config = project_config.pipelines[
+                self.pipeline.name]
+            queue_name = project_pipeline_config.queue_name
+            if queue_name and queue_name in change_queues:
+                change_queue = change_queues[queue_name]
+            else:
+                p = self.pipeline
+                change_queue = model.ChangeQueue(
+                    p,
+                    window=p.window,
+                    window_floor=p.window_floor,
+                    window_increase_type=p.window_increase_type,
+                    window_increase_factor=p.window_increase_factor,
+                    window_decrease_type=p.window_decrease_type,
+                    window_decrease_factor=p.window_decrease_factor,
+                    name=queue_name)
+                if queue_name:
+                    # If this is a named queue, keep track of it in
+                    # case it is referenced again.  Otherwise, it will
+                    # have a name automatically generated from its
+                    # constituent projects.
+                    change_queues[queue_name] = change_queue
+                self.pipeline.addQueue(change_queue)
+                self.log.debug("Created queue: %s" % change_queue)
+            change_queue.addProject(project)
+            self.log.debug("Added project %s to queue: %s" %
+                           (project, change_queue))
+
+    def getChangeQueue(self, change, existing=None):
+        if existing:
+            return StaticChangeQueueContextManager(existing)
+        return StaticChangeQueueContextManager(
+            self.pipeline.getQueue(change.project))
+
+    def isChangeReadyToBeEnqueued(self, change):
+        if not self.pipeline.source.canMerge(change,
+                                             self.getSubmitAllowNeeds()):
+            self.log.debug("Change %s can not merge, ignoring" % change)
+            return False
+        return True
+
+    def enqueueChangesBehind(self, change, quiet, ignore_requirements,
+                             change_queue):
+        to_enqueue = []
+        self.log.debug("Checking for changes needing %s:" % change)
+        if not hasattr(change, 'needed_by_changes'):
+            self.log.debug("  Changeish does not support dependencies")
+            return
+        for other_change in change.needed_by_changes:
+            with self.getChangeQueue(other_change) as other_change_queue:
+                if other_change_queue != change_queue:
+                    self.log.debug("  Change %s in project %s can not be "
+                                   "enqueued in the target queue %s" %
+                                   (other_change, other_change.project,
+                                    change_queue))
+                    continue
+            if self.pipeline.source.canMerge(other_change,
+                                             self.getSubmitAllowNeeds()):
+                self.log.debug("  Change %s needs %s and is ready to merge" %
+                               (other_change, change))
+                to_enqueue.append(other_change)
+
+        if not to_enqueue:
+            self.log.debug("  No changes need %s" % change)
+
+        for other_change in to_enqueue:
+            self.addChange(other_change, quiet=quiet,
+                           ignore_requirements=ignore_requirements,
+                           change_queue=change_queue)
+
+    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
+                            change_queue):
+        ret = self.checkForChangesNeededBy(change, change_queue)
+        if ret in [True, False]:
+            return ret
+        self.log.debug("  Changes %s must be merged ahead of %s" %
+                       (ret, change))
+        for needed_change in ret:
+            r = self.addChange(needed_change, quiet=quiet,
+                               ignore_requirements=ignore_requirements,
+                               change_queue=change_queue)
+            if not r:
+                return False
+        return True
+
+    def checkForChangesNeededBy(self, change, change_queue):
+        self.log.debug("Checking for changes needed by %s:" % change)
+        # Return true if okay to proceed enqueing this change,
+        # false if the change should not be enqueued.
+        if not hasattr(change, 'needs_changes'):
+            self.log.debug("  Changeish does not support dependencies")
+            return True
+        if not change.needs_changes:
+            self.log.debug("  No changes needed")
+            return True
+        changes_needed = []
+        # Ignore supplied change_queue
+        with self.getChangeQueue(change) as change_queue:
+            for needed_change in change.needs_changes:
+                self.log.debug("  Change %s needs change %s:" % (
+                    change, needed_change))
+                if needed_change.is_merged:
+                    self.log.debug("  Needed change is merged")
+                    continue
+                with self.getChangeQueue(needed_change) as needed_change_queue:
+                    if needed_change_queue != change_queue:
+                        self.log.debug("  Change %s in project %s does not "
+                                       "share a change queue with %s "
+                                       "in project %s" %
+                                       (needed_change, needed_change.project,
+                                        change, change.project))
+                        return False
+                if not needed_change.is_current_patchset:
+                    self.log.debug("  Needed change is not the "
+                                   "current patchset")
+                    return False
+                if self.isChangeAlreadyInQueue(needed_change, change_queue):
+                    self.log.debug("  Needed change is already ahead "
+                                   "in the queue")
+                    continue
+                if self.pipeline.source.canMerge(needed_change,
+                                                 self.getSubmitAllowNeeds()):
+                    self.log.debug("  Change %s is needed" % needed_change)
+                    if needed_change not in changes_needed:
+                        changes_needed.append(needed_change)
+                        continue
+                # The needed change can't be merged.
+                self.log.debug("  Change %s is needed but can not be merged" %
+                               needed_change)
+                return False
+        if changes_needed:
+            return changes_needed
+        return True
+
+    def getFailingDependentItems(self, item):
+        if not hasattr(item.change, 'needs_changes'):
+            return None
+        if not item.change.needs_changes:
+            return None
+        failing_items = set()
+        for needed_change in item.change.needs_changes:
+            needed_item = self.getItemForChange(needed_change)
+            if not needed_item:
+                continue
+            if needed_item.current_build_set.failing_reasons:
+                failing_items.add(needed_item)
+        if failing_items:
+            return failing_items
+        return None
diff --git a/zuul/manager/independent.py b/zuul/manager/independent.py
new file mode 100644
index 0000000..3d28327
--- /dev/null
+++ b/zuul/manager/independent.py
@@ -0,0 +1,95 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from zuul import model
+from zuul.manager import PipelineManager, DynamicChangeQueueContextManager
+
+
+class IndependentPipelineManager(PipelineManager):
+    """PipelineManager that puts every Change into its own ChangeQueue."""
+
+    log = logging.getLogger("zuul.IndependentPipelineManager")
+    changes_merge = False
+
+    def _postConfig(self, layout):
+        super(IndependentPipelineManager, self)._postConfig(layout)
+
+    def getChangeQueue(self, change, existing=None):
+        # creates a new change queue for every change
+        if existing:
+            return DynamicChangeQueueContextManager(existing)
+        change_queue = model.ChangeQueue(self.pipeline)
+        change_queue.addProject(change.project)
+        self.pipeline.addQueue(change_queue)
+        self.log.debug("Dynamically created queue %s", change_queue)
+        return DynamicChangeQueueContextManager(change_queue)
+
+    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
+                            change_queue):
+        ret = self.checkForChangesNeededBy(change, change_queue)
+        if ret in [True, False]:
+            return ret
+        self.log.debug("  Changes %s must be merged ahead of %s" %
+                       (ret, change))
+        for needed_change in ret:
+            # This differs from the dependent pipeline by enqueuing
+            # changes ahead as "not live", that is, not intended to
+            # have jobs run.  Also, pipeline requirements are always
+            # ignored (which is safe because the changes are not
+            # live).
+            r = self.addChange(needed_change, quiet=True,
+                               ignore_requirements=True,
+                               live=False, change_queue=change_queue)
+            if not r:
+                return False
+        return True
+
+    def checkForChangesNeededBy(self, change, change_queue):
+        if self.pipeline.ignore_dependencies:
+            return True
+        self.log.debug("Checking for changes needed by %s:" % change)
+        # Return true if okay to proceed enqueing this change,
+        # false if the change should not be enqueued.
+        if not hasattr(change, 'needs_changes'):
+            self.log.debug("  Changeish does not support dependencies")
+            return True
+        if not change.needs_changes:
+            self.log.debug("  No changes needed")
+            return True
+        changes_needed = []
+        for needed_change in change.needs_changes:
+            self.log.debug("  Change %s needs change %s:" % (
+                change, needed_change))
+            if needed_change.is_merged:
+                self.log.debug("  Needed change is merged")
+                continue
+            if self.isChangeAlreadyInQueue(needed_change, change_queue):
+                self.log.debug("  Needed change is already ahead in the queue")
+                continue
+            self.log.debug("  Change %s is needed" % needed_change)
+            if needed_change not in changes_needed:
+                changes_needed.append(needed_change)
+                continue
+            # This differs from the dependent pipeline check in not
+            # verifying that the dependent change is mergable.
+        if changes_needed:
+            return changes_needed
+        return True
+
+    def dequeueItem(self, item):
+        super(IndependentPipelineManager, self).dequeueItem(item)
+        # An independent pipeline manager dynamically removes empty
+        # queues
+        if not item.queue.queue:
+            self.pipeline.removeQueue(item.queue)
diff --git a/zuul/merger/client.py b/zuul/merger/client.py
index 9e8c243..990d33e 100644
--- a/zuul/merger/client.py
+++ b/zuul/merger/client.py
@@ -14,6 +14,7 @@
 
 import json
 import logging
+import threading
 from uuid import uuid4
 
 import gear
@@ -32,7 +33,7 @@
 
 class MergeGearmanClient(gear.Client):
     def __init__(self, merge_client):
-        super(MergeGearmanClient, self).__init__()
+        super(MergeGearmanClient, self).__init__('Zuul Merge Client')
         self.__merge_client = merge_client
 
     def handleWorkComplete(self, packet):
@@ -55,6 +56,18 @@
         self.__merge_client.onBuildCompleted(job)
 
 
+class MergeJob(gear.Job):
+    def __init__(self, *args, **kw):
+        super(MergeJob, self).__init__(*args, **kw)
+        self.__event = threading.Event()
+
+    def setComplete(self):
+        self.__event.set()
+
+    def wait(self, timeout=300):
+        return self.__event.wait(timeout)
+
+
 class MergeClient(object):
     log = logging.getLogger("zuul.MergeClient")
 
@@ -71,54 +84,65 @@
         self.gearman.addServer(server, port)
         self.log.debug("Waiting for gearman")
         self.gearman.waitForServer()
-        self.build_sets = {}
+        self.jobs = set()
 
     def stop(self):
         self.gearman.shutdown()
 
     def areMergesOutstanding(self):
-        if self.build_sets:
+        if self.jobs:
             return True
         return False
 
     def submitJob(self, name, data, build_set,
                   precedence=zuul.model.PRECEDENCE_NORMAL):
         uuid = str(uuid4().hex)
-        job = gear.Job(name,
+        job = MergeJob(name,
                        json.dumps(data),
                        unique=uuid)
+        job.build_set = build_set
         self.log.debug("Submitting job %s with data %s" % (job, data))
-        self.build_sets[uuid] = build_set
+        self.jobs.add(job)
         self.gearman.submitJob(job, precedence=precedence,
                                timeout=300)
+        return job
 
-    def mergeChanges(self, items, build_set,
+    def mergeChanges(self, items, build_set, files=None,
                      precedence=zuul.model.PRECEDENCE_NORMAL):
-        data = dict(items=items)
+        data = dict(items=items,
+                    files=files)
         self.submitJob('merger:merge', data, build_set, precedence)
 
-    def updateRepo(self, project, connection_name, url, build_set,
+    def updateRepo(self, project, url, build_set,
                    precedence=zuul.model.PRECEDENCE_NORMAL):
         data = dict(project=project,
-                    connection_name=connection_name,
                     url=url)
         self.submitJob('merger:update', data, build_set, precedence)
 
+    def getFiles(self, project, url, branch, files,
+                 precedence=zuul.model.PRECEDENCE_HIGH):
+        data = dict(project=project,
+                    url=url,
+                    branch=branch,
+                    files=files)
+        job = self.submitJob('merger:cat', data, None, precedence)
+        return job
+
     def onBuildCompleted(self, job):
-        build_set = self.build_sets.get(job.unique)
-        if build_set:
-            data = getJobData(job)
-            zuul_url = data.get('zuul_url')
-            merged = data.get('merged', False)
-            updated = data.get('updated', False)
-            commit = data.get('commit')
-            self.log.info("Merge %s complete, merged: %s, updated: %s, "
-                          "commit: %s" %
-                          (job, merged, updated, build_set.commit))
-            self.sched.onMergeCompleted(build_set, zuul_url,
-                                        merged, updated, commit)
-            # The test suite expects the build_set to be removed from
-            # the internal dict after the wake flag is set.
-            del self.build_sets[job.unique]
-        else:
-            self.log.error("Unable to find build set for uuid %s" % job.unique)
+        data = getJobData(job)
+        zuul_url = data.get('zuul_url')
+        merged = data.get('merged', False)
+        updated = data.get('updated', False)
+        commit = data.get('commit')
+        files = data.get('files', {})
+        job.files = files
+        self.log.info("Merge %s complete, merged: %s, updated: %s, "
+                      "commit: %s" %
+                      (job, merged, updated, commit))
+        job.setComplete()
+        if job.build_set:
+            self.sched.onMergeCompleted(job.build_set, zuul_url,
+                                        merged, updated, commit, files)
+        # The test suite expects the job to be removed from the
+        # internal account after the wake flag is set.
+        self.jobs.remove(job)
diff --git a/zuul/merger/merger.py b/zuul/merger/merger.py
index a974e9c..d07a95b 100644
--- a/zuul/merger/merger.py
+++ b/zuul/merger/merger.py
@@ -77,12 +77,8 @@
         return self._initialized
 
     def createRepoObject(self):
-        try:
-            self._ensure_cloned()
-            repo = git.Repo(self.local_path)
-        except:
-            self.log.exception("Unable to initialize repo for %s" %
-                               self.local_path)
+        self._ensure_cloned()
+        repo = git.Repo(self.local_path)
         return repo
 
     def reset(self):
@@ -195,6 +191,20 @@
             origin.fetch()
         origin.fetch(tags=True)
 
+    def getFiles(self, files, branch=None, commit=None):
+        ret = {}
+        repo = self.createRepoObject()
+        if branch:
+            tree = repo.heads[branch].commit.tree
+        else:
+            tree = repo.commit(commit).tree
+        for fn in files:
+            if fn in tree:
+                ret[fn] = tree[fn].data_stream.read()
+            else:
+                ret[fn] = None
+        return ret
+
 
 class Merger(object):
     log = logging.getLogger("zuul.Merger")
@@ -204,24 +214,17 @@
         self.working_root = working_root
         if not os.path.exists(working_root):
             os.makedirs(working_root)
-        self._makeSSHWrappers(working_root, connections)
+        self.connections = connections
         self.email = email
         self.username = username
 
-    def _makeSSHWrappers(self, working_root, connections):
-        for connection_name, connection in connections.items():
-            sshkey = connection.connection_config.get('sshkey')
-            if sshkey:
-                self._makeSSHWrapper(sshkey, working_root, connection_name)
-
-    def _makeSSHWrapper(self, key, merge_root, connection_name='default'):
-        wrapper_name = '.ssh_wrapper_%s' % connection_name
-        name = os.path.join(merge_root, wrapper_name)
-        fd = open(name, 'w')
-        fd.write('#!/bin/bash\n')
-        fd.write('ssh -i %s $@\n' % key)
-        fd.close()
-        os.chmod(name, 0o755)
+    def _get_ssh_cmd(self, connection_name):
+        sshkey = self.connections.connections.get(connection_name).\
+            connection_config.get('sshkey')
+        if sshkey:
+            return 'ssh -i %s' % sshkey
+        else:
+            return None
 
     def _setGitSsh(self, connection_name):
         wrapper_name = '.ssh_wrapper_%s' % connection_name
@@ -250,8 +253,11 @@
                             " without a url" % (project,))
         return self.addProject(project, url)
 
-    def updateRepo(self, project, connection_name, url):
-        self._setGitSsh(connection_name)
+    def updateRepo(self, project, url):
+        # TODOv3(jhesketh): Reimplement
+        # da90a50b794f18f74de0e2c7ec3210abf79dda24 after merge..
+        # Likely we'll handle connection context per projects differently.
+        # self._setGitSsh()
         repo = self.getRepo(project, url)
         try:
             self.log.info("Updating local repository %s", project)
@@ -259,6 +265,16 @@
         except Exception:
             self.log.exception("Unable to update %s", project)
 
+    def checkoutBranch(self, project, url, branch):
+        repo = self.getRepo(project, url)
+        if repo.hasBranch(branch):
+            self.log.info("Checking out branch %s of %s" % (branch, project))
+            head = repo.getBranchHead(branch)
+            repo.checkout(head)
+        else:
+            raise Exception("Project %s does not have branch %s" %
+                            (project, branch))
+
     def _mergeChange(self, item, ref):
         repo = self.getRepo(item['project'], item['url'])
         try:
@@ -292,18 +308,22 @@
         self.log.debug("Processing refspec %s for project %s / %s ref %s" %
                        (item['refspec'], item['project'], item['branch'],
                         item['ref']))
-        self._setGitSsh(item['connection_name'])
         repo = self.getRepo(item['project'], item['url'])
         key = (item['project'], item['branch'])
+
         # See if we have a commit for this change already in this repo
         zuul_ref = item['branch'] + '/' + item['ref']
-        commit = repo.getCommitFromRef(zuul_ref)
-        if commit:
-            self.log.debug("Found commit %s for ref %s" % (commit, zuul_ref))
-            # Store this as the most recent commit for this
-            # project-branch
-            recent[key] = commit
-            return commit
+        with repo.createRepoObject().git.custom_environment(
+            GIT_SSH_COMMAND=self._get_ssh_cmd(item['connection_name'])):
+            commit = repo.getCommitFromRef(zuul_ref)
+            if commit:
+                self.log.debug(
+                    "Found commit %s for ref %s" % (commit, zuul_ref))
+                # Store this as the most recent commit for this
+                # project-branch
+                recent[key] = commit
+                return commit
+
         self.log.debug("Unable to find commit for ref %s" % (zuul_ref,))
         # We need to merge the change
         # Get the most recent commit for this project-branch
@@ -321,28 +341,31 @@
         else:
             self.log.debug("Found base commit %s for %s" % (base, key,))
         # Merge the change
-        commit = self._mergeChange(item, base)
-        if not commit:
-            return None
-        # Store this commit as the most recent for this project-branch
-        recent[key] = commit
-        # Set the Zuul ref for this item to point to the most recent
-        # commits of each project-branch
-        for key, mrc in recent.items():
-            project, branch = key
-            try:
-                repo = self.getRepo(project, None)
-                zuul_ref = branch + '/' + item['ref']
-                repo.createZuulRef(zuul_ref, mrc)
-            except Exception:
-                self.log.exception("Unable to set zuul ref %s for "
-                                   "item %s" % (zuul_ref, item))
+        with repo.createRepoObject().git.custom_environment(
+            GIT_SSH_COMMAND=self._get_ssh_cmd(item['connection_name'])):
+            commit = self._mergeChange(item, base)
+            if not commit:
                 return None
-        return commit
+            # Store this commit as the most recent for this project-branch
+            recent[key] = commit
+            # Set the Zuul ref for this item to point to the most recent
+            # commits of each project-branch
+            for key, mrc in recent.items():
+                project, branch = key
+                try:
+                    repo = self.getRepo(project, None)
+                    zuul_ref = branch + '/' + item['ref']
+                    repo.createZuulRef(zuul_ref, mrc)
+                except Exception:
+                    self.log.exception("Unable to set zuul ref %s for "
+                                       "item %s" % (zuul_ref, item))
+                    return None
+            return commit
 
-    def mergeChanges(self, items):
+    def mergeChanges(self, items, files=None):
         recent = {}
         commit = None
+        read_files = []
         for item in items:
             if item.get("number") and item.get("patchset"):
                 self.log.debug("Merging for change %s,%s." %
@@ -353,4 +376,16 @@
             commit = self._mergeItem(item, recent)
             if not commit:
                 return None
+            if files:
+                repo = self.getRepo(item['project'], item['url'])
+                repo_files = repo.getFiles(files, commit=commit)
+                read_files.append(dict(project=item['project'],
+                                       branch=item['branch'],
+                                       files=repo_files))
+        if files:
+            return commit.hexsha, read_files
         return commit.hexsha
+
+    def getFiles(self, project, url, branch, files):
+        repo = self.getRepo(project, url)
+        return repo.getFiles(files, branch=branch)
diff --git a/zuul/merger/server.py b/zuul/merger/server.py
index b1921d9..cee011a 100644
--- a/zuul/merger/server.py
+++ b/zuul/merger/server.py
@@ -32,7 +32,7 @@
         if self.config.has_option('merger', 'git_dir'):
             merge_root = self.config.get('merger', 'git_dir')
         else:
-            merge_root = '/var/lib/zuul/git'
+            merge_root = '/var/lib/zuul/merger-git'
 
         if self.config.has_option('merger', 'git_user_email'):
             merge_email = self.config.get('merger', 'git_user_email')
@@ -68,6 +68,7 @@
     def register(self):
         self.worker.registerFunction("merger:merge")
         self.worker.registerFunction("merger:update")
+        self.worker.registerFunction("merger:cat")
 
     def stop(self):
         self.log.debug("Stopping")
@@ -90,6 +91,9 @@
                     elif job.name == 'merger:update':
                         self.log.debug("Got update job: %s" % job.unique)
                         self.update(job)
+                    elif job.name == 'merger:cat':
+                        self.log.debug("Got cat job: %s" % job.unique)
+                        self.cat(job)
                     else:
                         self.log.error("Unable to handle job %s" % job.name)
                         job.sendWorkFail()
@@ -101,17 +105,29 @@
 
     def merge(self, job):
         args = json.loads(job.arguments)
-        commit = self.merger.mergeChanges(args['items'])
-        result = dict(merged=(commit is not None),
-                      commit=commit,
+        ret = self.merger.mergeChanges(args['items'], args.get('files'))
+        result = dict(merged=(ret is not None),
                       zuul_url=self.zuul_url)
+        if args.get('files'):
+            result['commit'], result['files'] = ret
+        else:
+            result['commit'] = ret
         job.sendWorkComplete(json.dumps(result))
 
     def update(self, job):
         args = json.loads(job.arguments)
         self.merger.updateRepo(args['project'],
-                               args['connection_name'],
                                args['url'])
         result = dict(updated=True,
                       zuul_url=self.zuul_url)
         job.sendWorkComplete(json.dumps(result))
+
+    def cat(self, job):
+        args = json.loads(job.arguments)
+        self.merger.updateRepo(args['project'], args['url'])
+        files = self.merger.getFiles(args['project'], args['url'],
+                                     args['branch'], args['files'])
+        result = dict(updated=True,
+                      files=files,
+                      zuul_url=self.zuul_url)
+        job.sendWorkComplete(json.dumps(result))
diff --git a/zuul/model.py b/zuul/model.py
index b24a06b..19931ea 100644
--- a/zuul/model.py
+++ b/zuul/model.py
@@ -12,6 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import abc
 import copy
 import os
 import re
@@ -20,6 +21,8 @@
 from uuid import uuid4
 import extras
 
+import six
+
 OrderedDict = extras.try_imports(['collections.OrderedDict',
                                   'ordereddict.OrderedDict'])
 
@@ -47,6 +50,32 @@
     'high': PRECEDENCE_HIGH,
 }
 
+# Request states
+STATE_REQUESTED = 'requested'
+STATE_PENDING = 'pending'
+STATE_FULFILLED = 'fulfilled'
+STATE_FAILED = 'failed'
+REQUEST_STATES = set([STATE_REQUESTED,
+                      STATE_PENDING,
+                      STATE_FULFILLED,
+                      STATE_FAILED])
+
+# Node states
+STATE_BUILDING = 'building'
+STATE_TESTING = 'testing'
+STATE_READY = 'ready'
+STATE_IN_USE = 'in-use'
+STATE_USED = 'used'
+STATE_HOLD = 'hold'
+STATE_DELETING = 'deleting'
+NODE_STATES = set([STATE_BUILDING,
+                   STATE_TESTING,
+                   STATE_READY,
+                   STATE_IN_USE,
+                   STATE_USED,
+                   STATE_HOLD,
+                   STATE_DELETING])
+
 
 def time_to_seconds(s):
     if s.endswith('s'):
@@ -68,14 +97,30 @@
 
 
 class Pipeline(object):
-    """A top-level pipeline such as check, gate, post, etc."""
-    def __init__(self, name):
+    """A configuration that ties triggers, reporters, managers and sources.
+
+    Source
+        Where changes should come from. It is a named connection to
+        an external service defined in zuul.conf
+
+    Trigger
+        A description of which events should be processed
+
+    Manager
+        Responsible for enqueing and dequeing Changes
+
+    Reporter
+        Communicates success and failure results somewhere
+    """
+    def __init__(self, name, layout):
         self.name = name
+        self.layout = layout
         self.description = None
         self.failure_message = None
         self.merge_failure_message = None
         self.success_message = None
         self.footer_message = None
+        self.start_message = None
         self.dequeue_on_new_patchset = True
         self.ignore_dependencies = False
         self.job_trees = {}  # project -> JobTree
@@ -83,6 +128,7 @@
         self.queues = []
         self.precedence = PRECEDENCE_NORMAL
         self.source = None
+        self.triggers = []
         self.start_actions = []
         self.success_actions = []
         self.failure_actions = []
@@ -98,17 +144,22 @@
         self.window_decrease_type = None
         self.window_decrease_factor = None
 
+    @property
+    def actions(self):
+        return (
+            self.start_actions +
+            self.success_actions +
+            self.failure_actions +
+            self.merge_failure_actions +
+            self.disabled_actions
+        )
+
     def __repr__(self):
         return '<Pipeline %s>' % self.name
 
     def setManager(self, manager):
         self.manager = manager
 
-    def addProject(self, project):
-        job_tree = JobTree(None)  # Null job == job tree root
-        self.job_trees[project] = job_tree
-        return job_tree
-
     def getProjects(self):
         # cmp is not in python3, applied idiom from
         # http://python-future.org/compatible_idioms.html#cmp
@@ -132,134 +183,6 @@
         tree = self.job_trees.get(project)
         return tree
 
-    def getJobs(self, item):
-        if not item.live:
-            return []
-        tree = self.getJobTree(item.change.project)
-        if not tree:
-            return []
-        return item.change.filterJobs(tree.getJobs())
-
-    def _findJobsToRun(self, job_trees, item, mutex):
-        torun = []
-        if item.item_ahead:
-            # Only run jobs if any 'hold' jobs on the change ahead
-            # have completed successfully.
-            if self.isHoldingFollowingChanges(item.item_ahead):
-                return []
-        for tree in job_trees:
-            job = tree.job
-            result = None
-            if job:
-                if not job.changeMatches(item.change):
-                    continue
-                build = item.current_build_set.getBuild(job.name)
-                if build:
-                    result = build.result
-                else:
-                    # There is no build for the root of this job tree,
-                    # so we should run it.
-                    if mutex.acquire(item, job):
-                        # If this job needs a mutex, either acquire it or make
-                        # sure that we have it before running the job.
-                        torun.append(job)
-            # If there is no job, this is a null job tree, and we should
-            # run all of its jobs.
-            if result == 'SUCCESS' or not job:
-                torun.extend(self._findJobsToRun(tree.job_trees, item, mutex))
-        return torun
-
-    def findJobsToRun(self, item, mutex):
-        if not item.live:
-            return []
-        tree = self.getJobTree(item.change.project)
-        if not tree:
-            return []
-        return self._findJobsToRun(tree.job_trees, item, mutex)
-
-    def haveAllJobsStarted(self, item):
-        for job in self.getJobs(item):
-            build = item.current_build_set.getBuild(job.name)
-            if not build or not build.start_time:
-                return False
-        return True
-
-    def areAllJobsComplete(self, item):
-        for job in self.getJobs(item):
-            build = item.current_build_set.getBuild(job.name)
-            if not build or not build.result:
-                return False
-        return True
-
-    def didAllJobsSucceed(self, item):
-        for job in self.getJobs(item):
-            if not job.voting:
-                continue
-            build = item.current_build_set.getBuild(job.name)
-            if not build:
-                return False
-            if build.result != 'SUCCESS':
-                return False
-        return True
-
-    def didMergerSucceed(self, item):
-        if item.current_build_set.unable_to_merge:
-            return False
-        return True
-
-    def didAnyJobFail(self, item):
-        for job in self.getJobs(item):
-            if not job.voting:
-                continue
-            build = item.current_build_set.getBuild(job.name)
-            if build and build.result and (build.result != 'SUCCESS'):
-                return True
-        return False
-
-    def isHoldingFollowingChanges(self, item):
-        if not item.live:
-            return False
-        for job in self.getJobs(item):
-            if not job.hold_following_changes:
-                continue
-            build = item.current_build_set.getBuild(job.name)
-            if not build:
-                return True
-            if build.result != 'SUCCESS':
-                return True
-
-        if not item.item_ahead:
-            return False
-        return self.isHoldingFollowingChanges(item.item_ahead)
-
-    def setResult(self, item, build):
-        if build.retry:
-            item.removeBuild(build)
-        elif build.result != 'SUCCESS':
-            # Get a JobTree from a Job so we can find only its dependent jobs
-            root = self.getJobTree(item.change.project)
-            tree = root.getJobTreeForJob(build.job)
-            for job in tree.getJobs():
-                fakebuild = Build(job, None)
-                fakebuild.result = 'SKIPPED'
-                item.addBuild(fakebuild)
-
-    def setUnableToMerge(self, item):
-        item.current_build_set.unable_to_merge = True
-        root = self.getJobTree(item.change.project)
-        for job in root.getJobs():
-            fakebuild = Build(job, None)
-            fakebuild.result = 'SKIPPED'
-            item.addBuild(fakebuild)
-
-    def setDequeuedNeedingChange(self, item):
-        item.dequeued_needing_change = True
-        root = self.getJobTree(item.change.project)
-        for job in root.getJobs():
-            fakebuild = Build(job, None)
-            fakebuild.result = 'SKIPPED'
-            item.addBuild(fakebuild)
-
     def getChangesInQueue(self):
         changes = []
         for shared_queue in self.queues:
@@ -302,18 +225,31 @@
 
 
 class ChangeQueue(object):
-    """DependentPipelines have multiple parallel queues shared by
-    different projects; this is one of them.  For instance, there may
-    a queue shared by interrelated projects foo and bar, and a second
-    queue for independent project baz.  Pipelines have one or more
-    ChangeQueues."""
+    """A ChangeQueue contains Changes to be processed related projects.
+
+    A Pipeline with a DependentPipelineManager has multiple parallel
+    ChangeQueues shared by different projects. For instance, there may a
+    ChangeQueue shared by interrelated projects foo and bar, and a second queue
+    for independent project baz.
+
+    A Pipeline with an IndependentPipelineManager puts every Change into its
+    own ChangeQueue
+
+    The ChangeQueue Window is inspired by TCP windows and controlls how many
+    Changes in a given ChangeQueue will be considered active and ready to
+    be processed. If a Change succeeds, the Window is increased by
+    `window_increase_factor`. If a Change fails, the Window is decreased by
+    `window_decrease_factor`.
+    """
     def __init__(self, pipeline, window=0, window_floor=1,
                  window_increase_type='linear', window_increase_factor=1,
-                 window_decrease_type='exponential', window_decrease_factor=2):
+                 window_decrease_type='exponential', window_decrease_factor=2,
+                 name=None):
         self.pipeline = pipeline
-        self.name = ''
-        self.assigned_name = None
-        self.generated_name = None
+        if name:
+            self.name = name
+        else:
+            self.name = ''
         self.projects = []
         self._jobs = set()
         self.queue = []
@@ -333,21 +269,9 @@
     def addProject(self, project):
         if project not in self.projects:
             self.projects.append(project)
-            self._jobs |= set(self.pipeline.getJobTree(project).getJobs())
 
-            names = [x.name for x in self.projects]
-            names.sort()
-            self.generated_name = ', '.join(names)
-
-            for job in self._jobs:
-                if job.queue_name:
-                    if (self.assigned_name and
-                            job.queue_name != self.assigned_name):
-                        raise Exception("More than one name assigned to "
-                                        "change queue: %s != %s" %
-                                        (self.assigned_name, job.queue_name))
-                    self.assigned_name = job.queue_name
-            self.name = self.assigned_name or self.generated_name
+            if not self.name:
+                self.name = project.name
 
     def enqueueChange(self, change):
         item = QueueItem(self, change)
@@ -425,13 +349,23 @@
 
 
 class Project(object):
-    def __init__(self, name, foreign=False):
+    """A Project represents a git repository such as openstack/nova."""
+
+    # NOTE: Projects should only be instantiated via a Source object
+    # so that they are associated with and cached by their Connection.
+    # This makes a Project instance a unique identifier for a given
+    # project from a given source.
+
+    def __init__(self, name, connection_name, foreign=False):
         self.name = name
-        self.merge_mode = MERGER_MERGE_RESOLVE
+        self.connection_name = connection_name
         # foreign projects are those referenced in dependencies
         # of layout projects, this should matter
         # when deciding whether to enqueue their changes
+        # TODOv3 (jeblair): re-add support for foreign projects if needed
         self.foreign = foreign
+        self.unparsed_config = None
+        self.unparsed_branch_config = {}  # branch -> UnparsedTenantConfig
 
     def __str__(self):
         return self.name
@@ -440,113 +374,504 @@
         return '<Project %s>' % (self.name)
 
 
-class Job(object):
-    def __init__(self, name):
-        # If you add attributes here, be sure to add them to the copy method.
+class Node(object):
+    """A single node for use by a job.
+
+    This may represent a request for a node, or an actual node
+    provided by Nodepool.
+    """
+
+    def __init__(self, name, image):
         self.name = name
-        self.queue_name = None
-        self.failure_message = None
-        self.success_message = None
-        self.failure_pattern = None
-        self.success_pattern = None
-        self.parameter_function = None
-        self.tags = set()
-        self.mutex = None
-        # A metajob should only supply values for attributes that have
-        # been explicitly provided, so avoid setting boolean defaults.
-        if self.is_metajob:
-            self.hold_following_changes = None
-            self.voting = None
+        self.image = image
+        self.id = None
+        self.lock = None
+        # Attributes from Nodepool
+        self._state = 'unknown'
+        self.state_time = time.time()
+        self.public_ipv4 = None
+        self.private_ipv4 = None
+        self.public_ipv6 = None
+        self._keys = []
+
+    @property
+    def state(self):
+        return self._state
+
+    @state.setter
+    def state(self, value):
+        if value not in NODE_STATES:
+            raise TypeError("'%s' is not a valid state" % value)
+        self._state = value
+        self.state_time = time.time()
+
+    def __repr__(self):
+        return '<Node %s %s:%s>' % (self.id, self.name, self.image)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __eq__(self, other):
+        if not isinstance(other, Node):
+            return False
+        return (self.name == other.name and
+                self.image == other.image and
+                self.id == other.id)
+
+    def toDict(self):
+        d = {}
+        d['state'] = self.state
+        for k in self._keys:
+            d[k] = getattr(self, k)
+        return d
+
+    def updateFromDict(self, data):
+        self._state = data['state']
+        keys = []
+        for k, v in data.items():
+            if k == 'state':
+                continue
+            keys.append(k)
+            setattr(self, k, v)
+        self._keys = keys
+
+
+class NodeSet(object):
+    """A set of nodes.
+
+    In configuration, NodeSets are attributes of Jobs indicating that
+    a Job requires nodes matching this description.
+
+    They may appear as top-level configuration objects and be named,
+    or they may appears anonymously in in-line job definitions.
+    """
+
+    def __init__(self, name=None):
+        self.name = name or ''
+        self.nodes = OrderedDict()
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __eq__(self, other):
+        if not isinstance(other, NodeSet):
+            return False
+        return (self.name == other.name and
+                self.nodes == other.nodes)
+
+    def copy(self):
+        n = NodeSet(self.name)
+        for name, node in self.nodes.items():
+            n.addNode(Node(node.name, node.image))
+        return n
+
+    def addNode(self, node):
+        if node.name in self.nodes:
+            raise Exception("Duplicate node in %s" % (self,))
+        self.nodes[node.name] = node
+
+    def getNodes(self):
+        return self.nodes.values()
+
+    def __repr__(self):
+        if self.name:
+            name = self.name + ' '
         else:
-            self.hold_following_changes = False
-            self.voting = True
-        self.branches = []
-        self._branches = []
-        self.files = []
-        self._files = []
-        self.skip_if_matcher = None
-        self.swift = {}
-        # Number of attempts to launch a job before giving up.
-        self.attempts = 3
+            name = ''
+        return '<NodeSet %s%s>' % (name, self.nodes)
+
+
+class NodeRequest(object):
+    """A request for a set of nodes."""
+
+    def __init__(self, build_set, job, nodeset):
+        self.build_set = build_set
+        self.job = job
+        self.nodeset = nodeset
+        self._state = STATE_REQUESTED
+        self.state_time = time.time()
+        self.stat = None
+        self.uid = uuid4().hex
+        self.id = None
+        # Zuul internal failure flag (not stored in ZK so it's not
+        # overwritten).
+        self.failed = False
+
+    @property
+    def fulfilled(self):
+        return (self._state == STATE_FULFILLED) and not self.failed
+
+    @property
+    def state(self):
+        return self._state
+
+    @state.setter
+    def state(self, value):
+        if value not in REQUEST_STATES:
+            raise TypeError("'%s' is not a valid state" % value)
+        self._state = value
+        self.state_time = time.time()
+
+    def __repr__(self):
+        return '<NodeRequest %s %s>' % (self.id, self.nodeset)
+
+    def toDict(self):
+        d = {}
+        nodes = [n.image for n in self.nodeset.getNodes()]
+        d['node_types'] = nodes
+        d['requestor'] = 'zuul'  # TODOv3(jeblair): better descriptor
+        d['state'] = self.state
+        d['state_time'] = self.state_time
+        return d
+
+    def updateFromDict(self, data):
+        self._state = data['state']
+        self.state_time = data['state_time']
+
+
+class SourceContext(object):
+    """A reference to the branch of a project in configuration.
+
+    Jobs and playbooks reference this to keep track of where they
+    originate."""
+
+    def __init__(self, project, branch, trusted):
+        self.project = project
+        self.branch = branch
+        self.trusted = trusted
+
+    def __repr__(self):
+        return '<SourceContext %s:%s trusted:%s>' % (self.project,
+                                                     self.branch,
+                                                     self.trusted)
+
+    def __deepcopy__(self, memo):
+        return self.copy()
+
+    def copy(self):
+        return self.__class__(self.project, self.branch, self.trusted)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __eq__(self, other):
+        if not isinstance(other, SourceContext):
+            return False
+        return (self.project == other.project and
+                self.branch == other.branch and
+                self.trusted == other.trusted)
+
+
+class PlaybookContext(object):
+
+    """A reference to a playbook in the context of a project.
+
+    Jobs refer to objects of this class for their main, pre, and post
+    playbooks so that we can keep track of which repos and security
+    contexts are needed in order to run them."""
+
+    def __init__(self, source_context, path):
+        self.source_context = source_context
+        self.path = path
+
+    def __repr__(self):
+        return '<PlaybookContext %s %s>' % (self.source_context,
+                                            self.path)
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __eq__(self, other):
+        if not isinstance(other, PlaybookContext):
+            return False
+        return (self.source_context == other.source_context and
+                self.path == other.path)
+
+    def toDict(self):
+        # Render to a dict to use in passing json to the launcher
+        return dict(
+            connection=self.source_context.project.connection_name,
+            project=self.source_context.project.name,
+            branch=self.source_context.branch,
+            trusted=self.source_context.trusted,
+            path=self.path)
+
+
+@six.add_metaclass(abc.ABCMeta)
+class Role(object):
+    """A reference to an ansible role."""
+
+    def __init__(self, target_name):
+        self.target_name = target_name
+
+    @abc.abstractmethod
+    def __repr__(self):
+        pass
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    @abc.abstractmethod
+    def __eq__(self, other):
+        if not isinstance(other, Role):
+            return False
+        return (self.target_name == other.target_name)
+
+    @abc.abstractmethod
+    def toDict(self):
+        # Render to a dict to use in passing json to the launcher
+        return dict(target_name=self.target_name)
+
+
+class ZuulRole(Role):
+    """A reference to an ansible role in a Zuul project."""
+
+    def __init__(self, target_name, connection_name, project_name, trusted):
+        super(ZuulRole, self).__init__(target_name)
+        self.connection_name = connection_name
+        self.project_name = project_name
+        self.trusted = trusted
+
+    def __repr__(self):
+        return '<ZuulRole %s %s>' % (self.project_name, self.target_name)
+
+    def __eq__(self, other):
+        if not isinstance(other, ZuulRole):
+            return False
+        return (super(ZuulRole, self).__eq__(other) and
+                self.connection_name == other.connection_name,
+                self.project_name == other.project_name,
+                self.trusted == other.trusted)
+
+    def toDict(self):
+        # Render to a dict to use in passing json to the launcher
+        d = super(ZuulRole, self).toDict()
+        d['type'] = 'zuul'
+        d['connection'] = self.connection_name
+        d['project'] = self.project_name
+        d['trusted'] = self.trusted
+        return d
+
+
+class Job(object):
+
+    """A Job represents the defintion of actions to perform.
+
+    NB: Do not modify attributes of this class, set them directly
+    (e.g., "job.run = ..." rather than "job.run.append(...)").
+    """
+
+    def __init__(self, name):
+        # These attributes may override even the final form of a job
+        # in the context of a project-pipeline.  They can not affect
+        # the execution of the job, but only whether the job is run
+        # and how it is reported.
+        self.context_attributes = dict(
+            voting=True,
+            hold_following_changes=False,
+            failure_message=None,
+            success_message=None,
+            failure_url=None,
+            success_url=None,
+            # Matchers.  These are separate so they can be individually
+            # overidden.
+            branch_matcher=None,
+            file_matcher=None,
+            irrelevant_file_matcher=None,  # skip-if
+            tags=frozenset(),
+        )
+
+        # These attributes affect how the job is actually run and more
+        # care must be taken when overriding them.  If a job is
+        # declared "final", these may not be overriden in a
+        # project-pipeline.
+        self.execution_attributes = dict(
+            timeout=None,
+            variables={},
+            nodeset=NodeSet(),
+            auth={},
+            workspace=None,
+            pre_run=(),
+            post_run=(),
+            run=(),
+            implied_run=(),
+            mutex=None,
+            attempts=3,
+            final=False,
+            roles=frozenset(),
+            repos=frozenset(),
+        )
+
+        # These are generally internal attributes which are not
+        # accessible via configuration.
+        self.other_attributes = dict(
+            name=None,
+            source_context=None,
+            inheritance_path=(),
+        )
+
+        self.inheritable_attributes = {}
+        self.inheritable_attributes.update(self.context_attributes)
+        self.inheritable_attributes.update(self.execution_attributes)
+        self.attributes = {}
+        self.attributes.update(self.inheritable_attributes)
+        self.attributes.update(self.other_attributes)
+
+        self.name = name
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __eq__(self, other):
+        # Compare the name and all inheritable attributes to determine
+        # whether two jobs with the same name are identically
+        # configured.  Useful upon reconfiguration.
+        if not isinstance(other, Job):
+            return False
+        if self.name != other.name:
+            return False
+        for k, v in self.attributes.items():
+            if getattr(self, k) != getattr(other, k):
+                return False
+        return True
 
     def __str__(self):
         return self.name
 
     def __repr__(self):
-        return '<Job %s>' % (self.name)
+        return '<Job %s branches: %s source: %s>' % (self.name,
+                                                     self.branch_matcher,
+                                                     self.source_context)
 
-    @property
-    def is_metajob(self):
-        return self.name.startswith('^')
+    def __getattr__(self, name):
+        v = self.__dict__.get(name)
+        if v is None:
+            return copy.deepcopy(self.attributes[name])
+        return v
 
-    def copy(self, other):
-        if other.failure_message:
-            self.failure_message = other.failure_message
-        if other.success_message:
-            self.success_message = other.success_message
-        if other.failure_pattern:
-            self.failure_pattern = other.failure_pattern
-        if other.success_pattern:
-            self.success_pattern = other.success_pattern
-        if other.parameter_function:
-            self.parameter_function = other.parameter_function
-        if other.branches:
-            self.branches = other.branches[:]
-            self._branches = other._branches[:]
-        if other.files:
-            self.files = other.files[:]
-            self._files = other._files[:]
-        if other.skip_if_matcher:
-            self.skip_if_matcher = other.skip_if_matcher.copy()
-        if other.swift:
-            self.swift.update(other.swift)
-        if other.mutex:
-            self.mutex = other.mutex
-        # Tags are merged via a union rather than a destructive copy
-        # because they are intended to accumulate as metajobs are
-        # applied.
-        if other.tags:
+    def _get(self, name):
+        return self.__dict__.get(name)
+
+    def setRun(self):
+        if not self.run:
+            self.run = self.implied_run
+
+    def updateVariables(self, other_vars):
+        v = self.variables
+        Job._deepUpdate(v, other_vars)
+        self.variables = v
+
+    @staticmethod
+    def _deepUpdate(a, b):
+        # Merge nested dictionaries if possible, otherwise, overwrite
+        # the value in 'a' with the value in 'b'.
+        for k, bv in b.items():
+            av = a.get(k)
+            if isinstance(av, dict) and isinstance(bv, dict):
+                Job._deepUpdate(av, bv)
+            else:
+                a[k] = bv
+
+    def inheritFrom(self, other):
+        """Copy the inheritable attributes which have been set on the other
+        job to this job."""
+        if not isinstance(other, Job):
+            raise Exception("Job unable to inherit from %s" % (other,))
+
+        do_not_inherit = set()
+        if other.auth and not other.auth.get('inherit'):
+            do_not_inherit.add('auth')
+
+        # copy all attributes
+        for k in self.inheritable_attributes:
+            if (other._get(k) is not None and k not in do_not_inherit):
+                setattr(self, k, copy.deepcopy(getattr(other, k)))
+
+        msg = 'inherit from %s' % (repr(other),)
+        self.inheritance_path = other.inheritance_path + (msg,)
+
+    def copy(self):
+        job = Job(self.name)
+        for k in self.attributes:
+            if self._get(k) is not None:
+                setattr(job, k, copy.deepcopy(self._get(k)))
+        return job
+
+    def applyVariant(self, other):
+        """Copy the attributes which have been set on the other job to this
+        job."""
+
+        if not isinstance(other, Job):
+            raise Exception("Job unable to inherit from %s" % (other,))
+
+        for k in self.execution_attributes:
+            if (other._get(k) is not None and
+                k not in set(['final'])):
+                if self.final:
+                    raise Exception("Unable to modify final job %s attribute "
+                                    "%s=%s with variant %s" % (
+                                        repr(self), k, other._get(k),
+                                        repr(other)))
+                if k not in set(['pre_run', 'post_run', 'roles', 'variables']):
+                    setattr(self, k, copy.deepcopy(other._get(k)))
+
+        # Don't set final above so that we don't trip an error halfway
+        # through assignment.
+        if other.final != self.attributes['final']:
+            self.final = other.final
+
+        if other._get('pre_run') is not None:
+            self.pre_run = self.pre_run + other.pre_run
+        if other._get('post_run') is not None:
+            self.post_run = other.post_run + self.post_run
+        if other._get('roles') is not None:
+            self.roles = self.roles.union(other.roles)
+        if other._get('variables') is not None:
+            self.updateVariables(other.variables)
+
+        for k in self.context_attributes:
+            if (other._get(k) is not None and
+                k not in set(['tags'])):
+                setattr(self, k, copy.deepcopy(other._get(k)))
+
+        if other._get('tags') is not None:
             self.tags = self.tags.union(other.tags)
-        # Only non-None values should be copied for boolean attributes.
-        if other.hold_following_changes is not None:
-            self.hold_following_changes = other.hold_following_changes
-        if other.voting is not None:
-            self.voting = other.voting
+
+        msg = 'apply variant %s' % (repr(other),)
+        self.inheritance_path = self.inheritance_path + (msg,)
 
     def changeMatches(self, change):
-        matches_branch = False
-        for branch in self.branches:
-            if hasattr(change, 'branch') and branch.match(change.branch):
-                matches_branch = True
-            if hasattr(change, 'ref') and branch.match(change.ref):
-                matches_branch = True
-        if self.branches and not matches_branch:
+        if self.branch_matcher and not self.branch_matcher.matches(change):
             return False
 
-        matches_file = False
-        for f in self.files:
-            if hasattr(change, 'files'):
-                for cf in change.files:
-                    if f.match(cf):
-                        matches_file = True
-        if self.files and not matches_file:
+        if self.file_matcher and not self.file_matcher.matches(change):
             return False
 
-        if self.skip_if_matcher and self.skip_if_matcher.matches(change):
+        # NB: This is a negative match.
+        if (self.irrelevant_file_matcher and
+            self.irrelevant_file_matcher.matches(change)):
             return False
 
         return True
 
 
 class JobTree(object):
-    """ A JobTree represents an instance of one Job, and holds JobTrees
-    whose jobs should be run if that Job succeeds.  A root node of a
-    JobTree will have no associated Job. """
+    """A JobTree holds one or more Jobs to represent Job dependencies.
+
+    If Job foo should only execute if Job bar succeeds, then there will
+    be a JobTree for foo, which will contain a JobTree for bar. A JobTree
+    can hold more than one dependent JobTrees, such that jobs bar and bang
+    both depend on job foo being successful.
+
+    A root node of a JobTree will have no associated Job."""
 
     def __init__(self, job):
         self.job = job
         self.job_trees = []
 
+    def __repr__(self):
+        return '<JobTree %s %s>' % (self.job, self.job_trees)
+
     def addJob(self, job):
         if job not in [x.job for x in self.job_trees]:
             t = JobTree(job)
@@ -572,13 +897,27 @@
                 return ret
         return None
 
+    def inheritFrom(self, other):
+        if other.job:
+            if not self.job:
+                self.job = other.job.copy()
+            else:
+                self.job.applyVariant(other.job)
+        for other_tree in other.job_trees:
+            this_tree = self.getJobTreeForJob(other_tree.job)
+            if not this_tree:
+                this_tree = JobTree(None)
+                self.job_trees.append(this_tree)
+            this_tree.inheritFrom(other_tree)
+
 
 class Build(object):
+    """A Build is an instance of a single running Job."""
+
     def __init__(self, job, uuid):
         self.job = job
         self.uuid = uuid
         self.url = None
-        self.number = None
         self.result = None
         self.build_set = None
         self.launch_time = time.time()
@@ -599,7 +938,7 @@
 
 
 class Worker(object):
-    """A model of the worker running a job"""
+    """Information about the specific worker executing a Build."""
     def __init__(self):
         self.name = "Unknown"
         self.hostname = None
@@ -623,7 +962,34 @@
         return '<Worker %s>' % self.name
 
 
+class RepoFiles(object):
+    """RepoFiles holds config-file content for per-project job config."""
+    # When we ask a merger to prepare a future multiple-repo state and
+    # collect files so that we can dynamically load our configuration,
+    # this class provides easy access to that data.
+    def __init__(self):
+        self.projects = {}
+
+    def __repr__(self):
+        return '<RepoFiles %s>' % self.projects
+
+    def setFiles(self, items):
+        self.projects = {}
+        for item in items:
+            project = self.projects.setdefault(item['project'], {})
+            branch = project.setdefault(item['branch'], {})
+            branch.update(item['files'])
+
+    def getFile(self, project, branch, fn):
+        return self.projects.get(project, {}).get(branch, {}).get(fn)
+
+
 class BuildSet(object):
+    """Contains the Builds for a Change representing potential future state.
+
+    A BuildSet also holds the UUID used to produce the Zuul Ref that builders
+    check out.
+    """
     # Merge states:
     NEW = 1
     PENDING = 2
@@ -646,8 +1012,13 @@
         self.commit = None
         self.zuul_url = None
         self.unable_to_merge = False
+        self.config_error = None  # None or an error message string.
         self.failing_reasons = []
         self.merge_state = self.NEW
+        self.nodesets = {}  # job -> nodeset
+        self.node_requests = {}  # job -> reqs
+        self.files = RepoFiles()
+        self.layout = None
         self.tries = {}
 
     def __repr__(self):
@@ -690,12 +1061,46 @@
         keys.sort()
         return [self.builds.get(x) for x in keys]
 
+    def getJobNodeSet(self, job_name):
+        # Return None if not provisioned; empty NodeSet if no nodes
+        # required
+        return self.nodesets.get(job_name)
+
+    def removeJobNodeSet(self, job_name):
+        if job_name not in self.nodesets:
+            raise Exception("No job set for %s" % (job_name))
+        del self.nodesets[job_name]
+
+    def setJobNodeRequest(self, job_name, req):
+        if job_name in self.node_requests:
+            raise Exception("Prior node request for %s" % (job_name))
+        self.node_requests[job_name] = req
+
+    def getJobNodeRequest(self, job_name):
+        return self.node_requests.get(job_name)
+
+    def jobNodeRequestComplete(self, job_name, req, nodeset):
+        if job_name in self.nodesets:
+            raise Exception("Prior node request for %s" % (job_name))
+        self.nodesets[job_name] = nodeset
+        del self.node_requests[job_name]
+
     def getTries(self, job_name):
         return self.tries.get(job_name)
 
+    def getMergeMode(self, job_name):
+        if not self.layout or job_name not in self.layout.project_configs:
+            return MERGER_MERGE_RESOLVE
+        return self.layout.project_configs[job_name].merge_mode
+
 
 class QueueItem(object):
-    """A changish inside of a Pipeline queue"""
+    """Represents the position of a Change in a ChangeQueue.
+
+    All Changes are enqueued into ChangeQueue in a QueueItem. The QueueItem
+    holds the current `BuildSet` as well as all previous `BuildSets` that were
+    produced for this `QueueItem`.
+    """
 
     def __init__(self, queue, change):
         self.pipeline = queue.pipeline
@@ -712,6 +1117,8 @@
         self.reported = False
         self.active = False  # Whether an item is within an active window
         self.live = True  # Whether an item is intended to be processed at all
+        self.layout = None  # This item's shadow layout
+        self.job_tree = None
 
     def __repr__(self):
         if self.pipeline:
@@ -739,6 +1146,200 @@
     def setReportedResult(self, result):
         self.current_build_set.result = result
 
+    def freezeJobTree(self):
+        """Find or create actual matching jobs for this item's change and
+        store the resulting job tree."""
+        layout = self.current_build_set.layout
+        self.job_tree = layout.createJobTree(self)
+
+    def hasJobTree(self):
+        """Returns True if the item has a job tree."""
+        return self.job_tree is not None
+
+    def getJobs(self):
+        if not self.live or not self.job_tree:
+            return []
+        return self.job_tree.getJobs()
+
+    def haveAllJobsStarted(self):
+        if not self.hasJobTree():
+            return False
+        for job in self.getJobs():
+            build = self.current_build_set.getBuild(job.name)
+            if not build or not build.start_time:
+                return False
+        return True
+
+    def areAllJobsComplete(self):
+        if (self.current_build_set.config_error or
+            self.current_build_set.unable_to_merge):
+            return True
+        if not self.hasJobTree():
+            return False
+        for job in self.getJobs():
+            build = self.current_build_set.getBuild(job.name)
+            if not build or not build.result:
+                return False
+        return True
+
+    def didAllJobsSucceed(self):
+        if not self.hasJobTree():
+            return False
+        for job in self.getJobs():
+            if not job.voting:
+                continue
+            build = self.current_build_set.getBuild(job.name)
+            if not build:
+                return False
+            if build.result != 'SUCCESS':
+                return False
+        return True
+
+    def didAnyJobFail(self):
+        if not self.hasJobTree():
+            return False
+        for job in self.getJobs():
+            if not job.voting:
+                continue
+            build = self.current_build_set.getBuild(job.name)
+            if build and build.result and (build.result != 'SUCCESS'):
+                return True
+        return False
+
+    def didMergerFail(self):
+        return self.current_build_set.unable_to_merge
+
+    def getConfigError(self):
+        return self.current_build_set.config_error
+
+    def isHoldingFollowingChanges(self):
+        if not self.live:
+            return False
+        if not self.hasJobTree():
+            return False
+        for job in self.getJobs():
+            if not job.hold_following_changes:
+                continue
+            build = self.current_build_set.getBuild(job.name)
+            if not build:
+                return True
+            if build.result != 'SUCCESS':
+                return True
+
+        if not self.item_ahead:
+            return False
+        return self.item_ahead.isHoldingFollowingChanges()
+
+    def _findJobsToRun(self, job_trees, mutex):
+        torun = []
+        if self.item_ahead:
+            # Only run jobs if any 'hold' jobs on the change ahead
+            # have completed successfully.
+            if self.item_ahead.isHoldingFollowingChanges():
+                return []
+        for tree in job_trees:
+            job = tree.job
+            result = None
+            if job:
+                if not job.changeMatches(self.change):
+                    continue
+                build = self.current_build_set.getBuild(job.name)
+                if build:
+                    result = build.result
+                else:
+                    # There is no build for the root of this job tree,
+                    # so it has not run yet.
+                    nodeset = self.current_build_set.getJobNodeSet(job.name)
+                    if nodeset is None:
+                        # The nodes for this job are not ready, skip
+                        # it for now.
+                        continue
+                    if mutex.acquire(self, job):
+                        # If this job needs a mutex, either acquire it or make
+                        # sure that we have it before running the job.
+                        torun.append(job)
+            # If there is no job, this is a null job tree, and we should
+            # run all of its jobs.
+            if result == 'SUCCESS' or not job:
+                torun.extend(self._findJobsToRun(tree.job_trees, mutex))
+        return torun
+
+    def findJobsToRun(self, mutex):
+        if not self.live:
+            return []
+        tree = self.job_tree
+        if not tree:
+            return []
+        return self._findJobsToRun(tree.job_trees, mutex)
+
+    def _findJobsToRequest(self, job_trees):
+        build_set = self.current_build_set
+        toreq = []
+        if self.item_ahead:
+            if self.item_ahead.isHoldingFollowingChanges():
+                return []
+        for tree in job_trees:
+            job = tree.job
+            result = None
+            if job:
+                if not job.changeMatches(self.change):
+                    continue
+                build = build_set.getBuild(job.name)
+                if build:
+                    result = build.result
+                else:
+                    nodeset = build_set.getJobNodeSet(job.name)
+                    if nodeset is None:
+                        req = build_set.getJobNodeRequest(job.name)
+                        if req is None:
+                            toreq.append(job)
+            if result == 'SUCCESS' or not job:
+                toreq.extend(self._findJobsToRequest(tree.job_trees))
+        return toreq
+
+    def findJobsToRequest(self):
+        if not self.live:
+            return []
+        tree = self.job_tree
+        if not tree:
+            return []
+        return self._findJobsToRequest(tree.job_trees)
+
+    def setResult(self, build):
+        if build.retry:
+            self.removeBuild(build)
+        elif build.result != 'SUCCESS':
+            # Get a JobTree from a Job so we can find only its dependent jobs
+            tree = self.job_tree.getJobTreeForJob(build.job)
+            for job in tree.getJobs():
+                fakebuild = Build(job, None)
+                fakebuild.result = 'SKIPPED'
+                self.addBuild(fakebuild)
+
+    def setNodeRequestFailure(self, job):
+        fakebuild = Build(job, None)
+        self.addBuild(fakebuild)
+        fakebuild.result = 'NODE_FAILURE'
+        self.setResult(fakebuild)
+
+    def setDequeuedNeedingChange(self):
+        self.dequeued_needing_change = True
+        self._setAllJobsSkipped()
+
+    def setUnableToMerge(self):
+        self.current_build_set.unable_to_merge = True
+        self._setAllJobsSkipped()
+
+    def setConfigError(self, error):
+        self.current_build_set.config_error = error
+        self._setAllJobsSkipped()
+
+    def _setAllJobsSkipped(self):
+        for job in self.getJobs():
+            fakebuild = Build(job, None)
+            fakebuild.result = 'SKIPPED'
+            self.addBuild(fakebuild)
+
     def formatJobResult(self, job, url_pattern=None):
         build = self.current_build_set.getBuild(job.name)
         result = build.result
@@ -746,13 +1347,13 @@
         if result == 'SUCCESS':
             if job.success_message:
                 result = job.success_message
-            if job.success_pattern:
-                pattern = job.success_pattern
+            if job.success_url:
+                pattern = job.success_url
         elif result == 'FAILURE':
             if job.failure_message:
                 result = job.failure_message
-            if job.failure_pattern:
-                pattern = job.failure_pattern
+            if job.failure_url:
+                pattern = job.failure_url
         url = None
         if pattern:
             try:
@@ -797,7 +1398,7 @@
         else:
             ret['owner'] = None
         max_remaining = 0
-        for job in self.pipeline.getJobs(self):
+        for job in self.getJobs():
             now = time.time()
             build = self.current_build_set.getBuild(job.name)
             elapsed = None
@@ -849,13 +1450,12 @@
                 'pipeline': build.pipeline.name if build else None,
                 'canceled': build.canceled if build else None,
                 'retry': build.retry if build else None,
-                'number': build.number if build else None,
                 'node_labels': build.node_labels if build else [],
                 'node_name': build.node_name if build else None,
                 'worker': worker,
             })
 
-        if self.pipeline.haveAllJobsStarted(self):
+        if self.haveAllJobsStarted():
             ret['remaining_time'] = max_remaining
         else:
             ret['remaining_time'] = None
@@ -877,7 +1477,7 @@
                 changeish.project.name,
                 changeish._id(),
                 self.item_ahead)
-        for job in self.pipeline.getJobs(self):
+        for job in self.getJobs():
             build = self.current_build_set.getBuild(job.name)
             if build:
                 result = build.result
@@ -901,7 +1501,7 @@
 
 
 class Changeish(object):
-    """Something like a change; either a change or a ref"""
+    """Base class for Change and Ref."""
 
     def __init__(self, project):
         self.project = project
@@ -928,8 +1528,12 @@
     def getRelatedChanges(self):
         return set()
 
+    def updatesConfig(self):
+        return False
+
 
 class Change(Changeish):
+    """A proposed new state for a Project."""
     def __init__(self, project):
         super(Change, self).__init__(project)
         self.branch = None
@@ -979,8 +1583,14 @@
             related.update(c.getRelatedChanges())
         return related
 
+    def updatesConfig(self):
+        if 'zuul.yaml' in self.files or '.zuul.yaml' in self.files:
+            return True
+        return False
+
 
 class Ref(Changeish):
+    """An existing state of a Project."""
     def __init__(self, project):
         super(Ref, self).__init__(project)
         self.ref = None
@@ -1017,6 +1627,8 @@
 
 
 class NullChange(Changeish):
+    # TODOv3(jeblair): remove this in favor of enqueueing Refs (eg
+    # current master) instead.
     def __repr__(self):
         return '<NullChange for %s>' % (self.project)
 
@@ -1034,10 +1646,13 @@
 
 
 class TriggerEvent(object):
+    """Incoming event from an external system."""
     def __init__(self):
         self.data = None
         # common
         self.type = None
+        # For management events (eg: enqueue / promote)
+        self.tenant_name = None
         self.project_name = None
         self.trigger_name = None
         # Representation of the user account that performed the event.
@@ -1078,6 +1693,7 @@
 
 
 class BaseFilter(object):
+    """Base Class for filtering which Changes and Events to process."""
     def __init__(self, required_approvals=[], reject_approvals=[]):
         self._required_approvals = copy.deepcopy(required_approvals)
         self.required_approvals = self._tidy_approvals(required_approvals)
@@ -1169,6 +1785,7 @@
 
 
 class EventFilter(BaseFilter):
+    """Allows a Pipeline to only respond to certain events."""
     def __init__(self, trigger, types=[], branches=[], refs=[],
                  event_approvals={}, comments=[], emails=[], usernames=[],
                  timespecs=[], required_approvals=[], reject_approvals=[],
@@ -1325,6 +1942,7 @@
 
 
 class ChangeishFilter(BaseFilter):
+    """Allows a Manager to only enqueue Changes that meet certain criteria."""
     def __init__(self, open=None, current_patchset=None,
                  statuses=[], required_approvals=[],
                  reject_approvals=[]):
@@ -1374,27 +1992,290 @@
         return True
 
 
-class Layout(object):
+class ProjectPipelineConfig(object):
+    # Represents a project cofiguration in the context of a pipeline
     def __init__(self):
+        self.job_tree = None
+        self.queue_name = None
+        self.merge_mode = None
+
+
+class ProjectConfig(object):
+    # Represents a project cofiguration
+    def __init__(self, name):
+        self.name = name
+        self.merge_mode = None
+        self.pipelines = {}
+
+
+class UnparsedAbideConfig(object):
+    """A collection of yaml lists that has not yet been parsed into objects.
+
+    An Abide is a collection of tenants.
+    """
+
+    def __init__(self):
+        self.tenants = []
+
+    def extend(self, conf):
+        if isinstance(conf, UnparsedAbideConfig):
+            self.tenants.extend(conf.tenants)
+            return
+
+        if not isinstance(conf, list):
+            raise Exception("Configuration items must be in the form of "
+                            "a list of dictionaries (when parsing %s)" %
+                            (conf,))
+        for item in conf:
+            if not isinstance(item, dict):
+                raise Exception("Configuration items must be in the form of "
+                                "a list of dictionaries (when parsing %s)" %
+                                (conf,))
+            if len(item.keys()) > 1:
+                raise Exception("Configuration item dictionaries must have "
+                                "a single key (when parsing %s)" %
+                                (conf,))
+            key, value = item.items()[0]
+            if key == 'tenant':
+                self.tenants.append(value)
+            else:
+                raise Exception("Configuration item not recognized "
+                                "(when parsing %s)" %
+                                (conf,))
+
+
+class UnparsedTenantConfig(object):
+    """A collection of yaml lists that has not yet been parsed into objects."""
+
+    def __init__(self):
+        self.pipelines = []
+        self.jobs = []
+        self.project_templates = []
         self.projects = {}
+        self.nodesets = []
+
+    def copy(self):
+        r = UnparsedTenantConfig()
+        r.pipelines = copy.deepcopy(self.pipelines)
+        r.jobs = copy.deepcopy(self.jobs)
+        r.project_templates = copy.deepcopy(self.project_templates)
+        r.projects = copy.deepcopy(self.projects)
+        r.nodesets = copy.deepcopy(self.nodesets)
+        return r
+
+    def extend(self, conf, source_context=None):
+        if isinstance(conf, UnparsedTenantConfig):
+            self.pipelines.extend(conf.pipelines)
+            self.jobs.extend(conf.jobs)
+            self.project_templates.extend(conf.project_templates)
+            for k, v in conf.projects.items():
+                self.projects.setdefault(k, []).extend(v)
+            self.nodesets.extend(conf.nodesets)
+            return
+
+        if not isinstance(conf, list):
+            raise Exception("Configuration items must be in the form of "
+                            "a list of dictionaries (when parsing %s)" %
+                            (conf,))
+
+        if source_context is None:
+            raise Exception("A source context must be provided "
+                            "(when parsing %s)" % (conf,))
+
+        for item in conf:
+            if not isinstance(item, dict):
+                raise Exception("Configuration items must be in the form of "
+                                "a list of dictionaries (when parsing %s)" %
+                                (conf,))
+            if len(item.keys()) > 1:
+                raise Exception("Configuration item dictionaries must have "
+                                "a single key (when parsing %s)" %
+                                (conf,))
+            key, value = item.items()[0]
+            value['_source_context'] = source_context
+            if key == 'project':
+                name = value['name']
+                self.projects.setdefault(name, []).append(value)
+            elif key == 'job':
+                self.jobs.append(value)
+            elif key == 'project-template':
+                self.project_templates.append(value)
+            elif key == 'pipeline':
+                self.pipelines.append(value)
+            elif key == 'nodeset':
+                self.nodesets.append(value)
+            else:
+                raise Exception("Configuration item `%s` not recognized "
+                                "(when parsing %s)" %
+                                (item, conf,))
+
+
+class Layout(object):
+    """Holds all of the Pipelines."""
+
+    def __init__(self):
+        self.tenant = None
+        self.project_configs = {}
+        self.project_templates = {}
         self.pipelines = OrderedDict()
-        self.jobs = {}
-        self.metajobs = []
+        # This is a dictionary of name -> [jobs].  The first element
+        # of the list is the first job added with that name.  It is
+        # the reference definition for a given job.  Subsequent
+        # elements are aspects of that job with different matchers
+        # that override some attribute of the job.  These aspects all
+        # inherit from the reference definition.
+        self.jobs = {'noop': [Job('noop')]}
+        self.nodesets = {}
 
     def getJob(self, name):
         if name in self.jobs:
-            return self.jobs[name]
-        job = Job(name)
-        if job.is_metajob:
-            regex = re.compile(name)
-            self.metajobs.append((regex, job))
+            return self.jobs[name][0]
+        raise Exception("Job %s not defined" % (name,))
+
+    def getJobs(self, name):
+        return self.jobs.get(name, [])
+
+    def addJob(self, job):
+        # We can have multiple variants of a job all with the same
+        # name, but these variants must all be defined in the same repo.
+        prior_jobs = [j for j in self.getJobs(job.name) if
+                      j.source_context.project !=
+                      job.source_context.project]
+        if prior_jobs:
+            raise Exception("Job %s in %s is not permitted to shadow "
+                            "job %s in %s" % (
+                                job,
+                                job.source_context.project,
+                                prior_jobs[0],
+                                prior_jobs[0].source_context.project))
+
+        if job.name in self.jobs:
+            self.jobs[job.name].append(job)
         else:
-            # Apply attributes from matching meta-jobs
-            for regex, metajob in self.metajobs:
-                if regex.match(name):
-                    job.copy(metajob)
-            self.jobs[name] = job
-        return job
+            self.jobs[job.name] = [job]
+
+    def addNodeSet(self, nodeset):
+        if nodeset.name in self.nodesets:
+            raise Exception("NodeSet %s already defined" % (nodeset.name,))
+        self.nodesets[nodeset.name] = nodeset
+
+    def addPipeline(self, pipeline):
+        self.pipelines[pipeline.name] = pipeline
+
+    def addProjectTemplate(self, project_template):
+        self.project_templates[project_template.name] = project_template
+
+    def addProjectConfig(self, project_config, update_pipeline=True):
+        self.project_configs[project_config.name] = project_config
+        # TODOv3(jeblair): tidy up the relationship between pipelines
+        # and projects and projectconfigs.  Specifically, move
+        # job_trees out of the pipeline since they are more dynamic
+        # than pipelines.  Remove the update_pipeline argument
+        if not update_pipeline:
+            return
+        for pipeline_name, pipeline_config in project_config.pipelines.items():
+            pipeline = self.pipelines[pipeline_name]
+            project = pipeline.source.getProject(project_config.name)
+            pipeline.job_trees[project] = pipeline_config.job_tree
+
+    def _createJobTree(self, change, job_trees, parent):
+        for tree in job_trees:
+            job = tree.job
+            if not job.changeMatches(change):
+                continue
+            frozen_job = None
+            matched = False
+            for variant in self.getJobs(job.name):
+                if variant.changeMatches(change):
+                    if frozen_job is None:
+                        frozen_job = variant.copy()
+                        frozen_job.setRun()
+                    else:
+                        frozen_job.applyVariant(variant)
+                    matched = True
+            if not matched:
+                # A change must match at least one defined job variant
+                # (that is to say that it must match more than just
+                # the job that is defined in the tree).
+                continue
+            # If the job does not allow auth inheritance, do not allow
+            # the project-pipeline variant to update its execution
+            # attributes.
+            if frozen_job.auth and not frozen_job.auth.get('inherit'):
+                frozen_job.final = True
+            frozen_job.applyVariant(job)
+            frozen_tree = JobTree(frozen_job)
+            parent.job_trees.append(frozen_tree)
+            self._createJobTree(change, tree.job_trees, frozen_tree)
+
+    def createJobTree(self, item):
+        project_config = self.project_configs.get(
+            item.change.project.name, None)
+        ret = JobTree(None)
+        # NOTE(pabelanger): It is possible for a foreign project not to have a
+        # configured pipeline, if so return an empty JobTree.
+        if project_config and item.pipeline.name in project_config.pipelines:
+            project_tree = \
+                project_config.pipelines[item.pipeline.name].job_tree
+            self._createJobTree(item.change, project_tree.job_trees, ret)
+        return ret
+
+
+class Tenant(object):
+    def __init__(self, name):
+        self.name = name
+        self.layout = None
+        # The unparsed configuration from the main zuul config for
+        # this tenant.
+        self.unparsed_config = None
+        # The list of repos from which we will read main
+        # configuration.  (source, project)
+        self.config_repos = []
+        # The unparsed config from those repos.
+        self.config_repos_config = None
+        # The list of projects from which we will read in-repo
+        # configuration.  (source, project)
+        self.project_repos = []
+        # The unparsed config from those repos.
+        self.project_repos_config = None
+        # A mapping of source -> {config_repos: {}, project_repos: {}}
+        self.sources = {}
+
+    def addConfigRepo(self, source, project):
+        sd = self.sources.setdefault(source.name,
+                                     {'config_repos': {},
+                                      'project_repos': {}})
+        sd['config_repos'][project.name] = project
+
+    def addProjectRepo(self, source, project):
+        sd = self.sources.setdefault(source.name,
+                                     {'config_repos': {},
+                                      'project_repos': {}})
+        sd['project_repos'][project.name] = project
+
+    def getRepo(self, source, project_name):
+        """Get a project given a source and project name
+
+        Returns a tuple (trusted, project) or (None, None) if the
+        project is not found.
+
+        Trusted indicates the project is a config repo.
+
+        """
+
+        sd = self.sources.get(source)
+        if not sd:
+            return (None, None)
+        if project_name in sd['config_repos']:
+            return (True, sd['config_repos'][project_name])
+        if project_name in sd['project_repos']:
+            return (False, sd['project_repos'][project_name])
+        return (None, None)
+
+
+class Abide(object):
+    def __init__(self):
+        self.tenants = OrderedDict()
 
 
 class JobTimeData(object):
diff --git a/zuul/nodepool.py b/zuul/nodepool.py
new file mode 100644
index 0000000..d116a2b
--- /dev/null
+++ b/zuul/nodepool.py
@@ -0,0 +1,144 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from zuul import model
+
+
+class Nodepool(object):
+    log = logging.getLogger('zuul.nodepool')
+
+    def __init__(self, scheduler):
+        self.requests = {}
+        self.sched = scheduler
+
+    def requestNodes(self, build_set, job):
+        # Create a copy of the nodeset to represent the actual nodes
+        # returned by nodepool.
+        nodeset = job.nodeset.copy()
+        req = model.NodeRequest(build_set, job, nodeset)
+        self.requests[req.uid] = req
+
+        self.sched.zk.submitNodeRequest(req, self._updateNodeRequest)
+        # Logged after submission so that we have the request id
+        self.log.info("Submited node request %s" % (req,))
+
+        return req
+
+    def cancelRequest(self, request):
+        self.log.info("Canceling node request %s" % (request,))
+        if request.uid in self.requests:
+            try:
+                self.sched.zk.deleteNodeRequest(request)
+            except Exception:
+                self.log.exception("Error deleting node request:")
+            del self.requests[request.uid]
+
+    def useNodeSet(self, nodeset):
+        self.log.info("Setting nodeset %s in use" % (nodeset,))
+        for node in nodeset.getNodes():
+            if node.lock is None:
+                raise Exception("Node %s is not locked" % (node,))
+            node.state = model.STATE_IN_USE
+            self.sched.zk.storeNode(node)
+
+    def returnNodeSet(self, nodeset):
+        self.log.info("Returning nodeset %s" % (nodeset,))
+        for node in nodeset.getNodes():
+            if node.lock is None:
+                raise Exception("Node %s is not locked" % (node,))
+            if node.state == model.STATE_IN_USE:
+                node.state = model.STATE_USED
+                self.sched.zk.storeNode(node)
+        self._unlockNodes(nodeset.getNodes())
+
+    def unlockNodeSet(self, nodeset):
+        self._unlockNodes(nodeset.getNodes())
+
+    def _unlockNodes(self, nodes):
+        for node in nodes:
+            try:
+                self.sched.zk.unlockNode(node)
+            except Exception:
+                self.log.exception("Error unlocking node:")
+
+    def lockNodeSet(self, nodeset):
+        self._lockNodes(nodeset.getNodes())
+
+    def _lockNodes(self, nodes):
+        # Try to lock all of the supplied nodes.  If any lock fails,
+        # try to unlock any which have already been locked before
+        # re-raising the error.
+        locked_nodes = []
+        try:
+            for node in nodes:
+                self.log.debug("Locking node %s" % (node,))
+                self.sched.zk.lockNode(node)
+                locked_nodes.append(node)
+        except Exception:
+            self.log.exception("Error locking nodes:")
+            self._unlockNodes(locked_nodes)
+            raise
+
+    def _updateNodeRequest(self, request, deleted):
+        # Return False to indicate that we should stop watching the
+        # node.
+        self.log.debug("Updating node request %s" % (request,))
+
+        if request.uid not in self.requests:
+            return False
+
+        if request.state in (model.STATE_FULFILLED, model.STATE_FAILED):
+            self.log.info("Node request %s %s" % (request, request.state))
+
+            # Give our results to the scheduler.
+            self.sched.onNodesProvisioned(request)
+            del self.requests[request.uid]
+
+            # Stop watching this request node.
+            return False
+        # TODOv3(jeblair): handle allocation failure
+        elif deleted:
+            self.log.debug("Resubmitting lost node request %s" % (request,))
+            self.sched.zk.submitNodeRequest(request, self._updateNodeRequest)
+        return True
+
+    def acceptNodes(self, request):
+        # Called by the scheduler when it wants to accept and lock
+        # nodes for (potential) use.
+
+        self.log.info("Accepting node request %s" % (request,))
+
+        locked = False
+        if request.fulfilled:
+            # If the request suceeded, try to lock the nodes.
+            try:
+                self.lockNodeSet(request.nodeset)
+                locked = True
+            except Exception:
+                self.log.exception("Error locking nodes:")
+                request.failed = True
+
+        # Regardless of whether locking (or even the request)
+        # succeeded, delete the request.
+        self.log.debug("Deleting node request %s" % (request,))
+        try:
+            self.sched.zk.deleteNodeRequest(request)
+        except Exception:
+            self.log.exception("Error deleting node request:")
+            request.failed = True
+            # If deleting the request failed, and we did lock the
+            # nodes, unlock the nodes since we're not going to use
+            # them.
+            if locked:
+                self.unlockNodeSet(request.nodeset)
diff --git a/zuul/reporter/__init__.py b/zuul/reporter/__init__.py
index cd78412..6df3f1b 100644
--- a/zuul/reporter/__init__.py
+++ b/zuul/reporter/__init__.py
@@ -27,18 +27,15 @@
 
     log = logging.getLogger("zuul.reporter.BaseReporter")
 
-    def __init__(self, reporter_config={}, sched=None, connection=None):
-        self.reporter_config = reporter_config
-        self.sched = sched
+    def __init__(self, driver, connection, config=None):
+        self.driver = driver
         self.connection = connection
+        self.config = config or {}
         self._action = None
 
     def setAction(self, action):
         self._action = action
 
-    def stop(self):
-        """Stop the reporter."""
-
     @abc.abstractmethod
     def report(self, source, pipeline, item):
         """Send the compiled report message."""
@@ -64,6 +61,8 @@
         }
         return format_methods[self._action]
 
+    # TODOv3(jeblair): Consider removing pipeline argument in favor of
+    # item.pipeline
     def _formatItemReport(self, pipeline, item, with_jobs=True):
         """Format a report from the given items. Usually to provide results to
         a reporter taking free-form text."""
@@ -75,10 +74,7 @@
         return ret
 
     def _formatItemReportStart(self, pipeline, item, with_jobs=True):
-        msg = "Starting %s jobs." % pipeline.name
-        if self.sched.config.has_option('zuul', 'status_url'):
-            msg += "\n" + self.sched.config.get('zuul', 'status_url')
-        return msg
+        return pipeline.start_message.format(pipeline=pipeline)
 
     def _formatItemReportSuccess(self, pipeline, item, with_jobs=True):
         msg = pipeline.success_message
@@ -89,8 +85,10 @@
     def _formatItemReportFailure(self, pipeline, item, with_jobs=True):
         if item.dequeued_needing_change:
             msg = 'This change depends on a change that failed to merge.\n'
-        elif not pipeline.didMergerSucceed(item):
+        elif item.didMergerFail():
             msg = pipeline.merge_failure_message
+        elif item.getConfigError():
+            msg = item.getConfigError()
         else:
             msg = pipeline.failure_message
             if with_jobs:
@@ -112,12 +110,13 @@
         # Return the list of jobs portion of the report
         ret = ''
 
-        if self.sched.config.has_option('zuul', 'url_pattern'):
-            url_pattern = self.sched.config.get('zuul', 'url_pattern')
+        config = self.connection.sched.config
+        if config.has_option('zuul', 'url_pattern'):
+            url_pattern = config.get('zuul', 'url_pattern')
         else:
             url_pattern = None
 
-        for job in pipeline.getJobs(item):
+        for job in item.getJobs():
             build = item.current_build_set.getBuild(job.name)
             (result, url) = item.formatJobResult(job, url_pattern)
             if not job.voting:
@@ -125,9 +124,9 @@
             else:
                 voting = ''
 
-            if self.sched.config and self.sched.config.has_option(
+            if config and config.has_option(
                 'zuul', 'report_times'):
-                report_times = self.sched.config.getboolean(
+                report_times = config.getboolean(
                     'zuul', 'report_times')
             else:
                 report_times = True
@@ -145,9 +144,9 @@
             else:
                 elapsed = ''
             name = ''
-            if self.sched.config.has_option('zuul', 'job_name_in_report'):
-                if self.sched.config.getboolean('zuul',
-                                                'job_name_in_report'):
+            if config.has_option('zuul', 'job_name_in_report'):
+                if config.getboolean('zuul',
+                                     'job_name_in_report'):
                     name = job.name + ' '
             ret += '- %s%s : %s%s%s\n' % (name, url, result, elapsed,
                                           voting)
diff --git a/zuul/rpcclient.py b/zuul/rpcclient.py
index 609f636..9d81520 100644
--- a/zuul/rpcclient.py
+++ b/zuul/rpcclient.py
@@ -48,16 +48,19 @@
         self.log.debug("Job complete, success: %s" % (not job.failure))
         return job
 
-    def enqueue(self, pipeline, project, trigger, change):
-        data = {'pipeline': pipeline,
+    def enqueue(self, tenant, pipeline, project, trigger, change):
+        data = {'tenant': tenant,
+                'pipeline': pipeline,
                 'project': project,
                 'trigger': trigger,
                 'change': change,
                 }
         return not self.submitJob('zuul:enqueue', data).failure
 
-    def enqueue_ref(self, pipeline, project, trigger, ref, oldrev, newrev):
-        data = {'pipeline': pipeline,
+    def enqueue_ref(
+            self, tenant, pipeline, project, trigger, ref, oldrev, newrev):
+        data = {'tenant': tenant,
+                'pipeline': pipeline,
                 'project': project,
                 'trigger': trigger,
                 'ref': ref,
@@ -66,8 +69,9 @@
                 }
         return not self.submitJob('zuul:enqueue_ref', data).failure
 
-    def promote(self, pipeline, change_ids):
-        data = {'pipeline': pipeline,
+    def promote(self, tenant, pipeline, change_ids):
+        data = {'tenant': tenant,
+                'pipeline': pipeline,
                 'change_ids': change_ids,
                 }
         return not self.submitJob('zuul:promote', data).failure
diff --git a/zuul/rpclistener.py b/zuul/rpclistener.py
index 716dcfb..c780df4 100644
--- a/zuul/rpclistener.py
+++ b/zuul/rpclistener.py
@@ -88,24 +88,34 @@
         args = json.loads(job.arguments)
         event = model.TriggerEvent()
         errors = ''
+        tenant = None
+        project = None
+        pipeline = None
 
-        trigger = self.sched.triggers.get(args['trigger'])
-        if trigger:
-            event.trigger_name = args['trigger']
-        else:
-            errors += 'Invalid trigger: %s\n' % (args['trigger'],)
+        tenant = self.sched.abide.tenants.get(args['tenant'])
+        if tenant:
+            event.tenant_name = args['tenant']
 
-        project = self.sched.layout.projects.get(args['project'])
-        if project:
-            event.project_name = args['project']
-        else:
-            errors += 'Invalid project: %s\n' % (args['project'],)
+            project = tenant.layout.project_configs.get(args['project'])
+            if project:
+                event.project_name = args['project']
+            else:
+                errors += 'Invalid project: %s\n' % (args['project'],)
 
-        pipeline = self.sched.layout.pipelines.get(args['pipeline'])
-        if pipeline:
-            event.forced_pipeline = args['pipeline']
+            pipeline = tenant.layout.pipelines.get(args['pipeline'])
+            if pipeline:
+                event.forced_pipeline = args['pipeline']
+
+                for trigger in pipeline.triggers:
+                    if trigger.name == args['trigger']:
+                        event.trigger_name = args['trigger']
+                        continue
+                if not event.trigger_name:
+                    errors += 'Invalid trigger: %s\n' % (args['trigger'],)
+            else:
+                errors += 'Invalid pipeline: %s\n' % (args['pipeline'],)
         else:
-            errors += 'Invalid pipeline: %s\n' % (args['pipeline'],)
+            errors += 'Invalid tenant: %s\n' % (args['tenant'],)
 
         return (args, event, errors, pipeline, project)
 
@@ -141,19 +151,21 @@
 
     def handle_promote(self, job):
         args = json.loads(job.arguments)
+        tenant_name = args['tenant']
         pipeline_name = args['pipeline']
         change_ids = args['change_ids']
-        self.sched.promote(pipeline_name, change_ids)
+        self.sched.promote(tenant_name, pipeline_name, change_ids)
         job.sendWorkComplete()
 
     def handle_get_running_jobs(self, job):
         # args = json.loads(job.arguments)
         # TODO: use args to filter by pipeline etc
         running_items = []
-        for pipeline_name, pipeline in six.iteritems(
-                self.sched.layout.pipelines):
-            for queue in pipeline.queues:
-                for item in queue.queue:
-                    running_items.append(item.formatJSON())
+        for tenant in self.sched.abide.tenants.values():
+            for pipeline_name, pipeline in six.iteritems(
+                    tenant.layout.pipelines):
+                for queue in pipeline.queues:
+                    for item in queue.queue:
+                        running_items.append(item.formatJSON())
 
         job.sendWorkComplete(json.dumps(running_items))
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index 931571f..2679522 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -22,43 +22,15 @@
 import pickle
 import six
 from six.moves import queue as Queue
-import re
 import sys
 import threading
 import time
-import yaml
 
-from zuul import layoutvalidator
+from zuul import configloader
 from zuul import model
-from zuul.model import Pipeline, Project, ChangeQueue
-from zuul.model import ChangeishFilter, NullChange
-from zuul import change_matcher, exceptions
+from zuul import exceptions
 from zuul import version as zuul_version
 
-statsd = extras.try_import('statsd.statsd')
-
-
-def deep_format(obj, paramdict):
-    """Apply the paramdict via str.format() to all string objects found within
-       the supplied obj. Lists and dicts are traversed recursively.
-
-       Borrowed from Jenkins Job Builder project"""
-    if isinstance(obj, str):
-        ret = obj.format(**paramdict)
-    elif isinstance(obj, list):
-        ret = []
-        for item in obj:
-            ret.append(deep_format(item, paramdict))
-    elif isinstance(obj, dict):
-        ret = {}
-        for item in obj:
-            exp_item = item.format(**paramdict)
-
-            ret[exp_item] = deep_format(obj[item], paramdict)
-    else:
-        ret = obj
-    return ret
-
 
 class MutexHandler(object):
     log = logging.getLogger("zuul.MutexHandler")
@@ -153,16 +125,29 @@
         self.config = config
 
 
+class TenantReconfigureEvent(ManagementEvent):
+    """Reconfigure the given tenant.  The layout will be (re-)loaded from
+    the path specified in the configuration.
+
+    :arg Tenant tenant: the tenant to reconfigure
+    """
+    def __init__(self, tenant):
+        super(TenantReconfigureEvent, self).__init__()
+        self.tenant = tenant
+
+
 class PromoteEvent(ManagementEvent):
     """Promote one or more changes to the head of the queue.
 
+    :arg str tenant_name: the name of the tenant
     :arg str pipeline_name: the name of the pipeline
     :arg list change_ids: a list of strings of change ids in the form
         1234,1
     """
 
-    def __init__(self, pipeline_name, change_ids):
+    def __init__(self, tenant_name, pipeline_name, change_ids):
         super(PromoteEvent, self).__init__()
+        self.tenant_name = tenant_name
         self.pipeline_name = pipeline_name
         self.change_ids = change_ids
 
@@ -216,12 +201,25 @@
     :arg str commit: The SHA of the merged commit (changes with refs).
     """
 
-    def __init__(self, build_set, zuul_url, merged, updated, commit):
+    def __init__(self, build_set, zuul_url, merged, updated, commit,
+                 files):
         self.build_set = build_set
         self.zuul_url = zuul_url
         self.merged = merged
         self.updated = updated
         self.commit = commit
+        self.files = files
+
+
+class NodesProvisionedEvent(ResultEvent):
+    """Nodes have been provisioned for a build_set
+
+    :arg BuildSet build_set: The build_set which has nodes.
+    :arg list of Node objects nodes: The provisioned nodes
+    """
+
+    def __init__(self, request):
+        self.request = request
 
 
 def toList(item):
@@ -233,6 +231,26 @@
 
 
 class Scheduler(threading.Thread):
+    """The engine of Zuul.
+
+    The Scheduler is reponsible for recieving events and dispatching
+    them to appropriate components (including pipeline managers,
+    mergers and launchers).
+
+    It runs a single threaded main loop which processes events
+    received one at a time and takes action as appropriate.  Other
+    parts of Zuul may run in their own thread, but synchronization is
+    performed within the scheduler to reduce or eliminate the need for
+    locking in most circumstances.
+
+    The main daemon will have one instance of the Scheduler class
+    running which will persist for the life of the process.  The
+    Scheduler instance is supplied to other Zuul components so that
+    they can submit events or otherwise communicate with other
+    components.
+
+    """
+
     log = logging.getLogger("zuul.Scheduler")
 
     def __init__(self, config, testonly=False):
@@ -246,8 +264,10 @@
         self._stopped = False
         self.launcher = None
         self.merger = None
+        self.connections = None
+        self.statsd = extras.try_import('statsd.statsd')
+        # TODO(jeblair): fix this
         self.mutex = MutexHandler()
-        self.connections = dict()
         # Despite triggers being part of the pipeline, there is one trigger set
         # per scheduler. The pipeline handles the trigger filters but since
         # the events are handled by the scheduler itself it needs to handle
@@ -259,7 +279,7 @@
         self.trigger_event_queue = Queue.Queue()
         self.result_event_queue = Queue.Queue()
         self.management_event_queue = Queue.Queue()
-        self.layout = model.Layout()
+        self.abide = model.Abide()
 
         if not testonly:
             time_dir = self._get_time_database_dir()
@@ -268,358 +288,19 @@
         self.zuul_version = zuul_version.version_info.release_string()
         self.last_reconfigured = None
 
-        # A set of reporter configuration keys to action mapping
-        self._reporter_actions = {
-            'start': 'start_actions',
-            'success': 'success_actions',
-            'failure': 'failure_actions',
-            'merge-failure': 'merge_failure_actions',
-            'disabled': 'disabled_actions',
-        }
-
     def stop(self):
         self._stopped = True
-        self._unloadDrivers()
         self.stopConnections()
         self.wake_event.set()
 
-    def testConfig(self, config_path, connections):
-        # Take the list of set up connections directly here rather than with
-        # registerConnections as we don't want to do the onLoad event yet.
-        return self._parseConfig(config_path, connections)
-
-    def _parseSkipIf(self, config_job):
-        cm = change_matcher
-        skip_matchers = []
-
-        for config_skip in config_job.get('skip-if', []):
-            nested_matchers = []
-
-            project_regex = config_skip.get('project')
-            if project_regex:
-                nested_matchers.append(cm.ProjectMatcher(project_regex))
-
-            branch_regex = config_skip.get('branch')
-            if branch_regex:
-                nested_matchers.append(cm.BranchMatcher(branch_regex))
-
-            file_regexes = toList(config_skip.get('all-files-match-any'))
-            if file_regexes:
-                file_matchers = [cm.FileMatcher(x) for x in file_regexes]
-                all_files_matcher = cm.MatchAllFiles(file_matchers)
-                nested_matchers.append(all_files_matcher)
-
-            # All patterns need to match a given skip-if predicate
-            skip_matchers.append(cm.MatchAll(nested_matchers))
-
-        if skip_matchers:
-            # Any skip-if predicate can be matched to trigger a skip
-            return cm.MatchAny(skip_matchers)
-
     def registerConnections(self, connections, load=True):
         # load: whether or not to trigger the onLoad for the connection. This
         # is useful for not doing a full load during layout validation.
         self.connections = connections
-        for connection_name, connection in self.connections.items():
-            connection.registerScheduler(self)
-            if load:
-                connection.onLoad()
+        self.connections.registerScheduler(self, load)
 
     def stopConnections(self):
-        for connection_name, connection in self.connections.items():
-            connection.onStop()
-
-    def _unloadDrivers(self):
-        for trigger in self.triggers.values():
-            trigger.stop()
-        self.triggers = {}
-        for pipeline in self.layout.pipelines.values():
-            pipeline.source.stop()
-            for action in self._reporter_actions.values():
-                for reporter in pipeline.__getattribute__(action):
-                    reporter.stop()
-
-    def _getDriver(self, dtype, connection_name, driver_config={}):
-        # Instantiate a driver such as a trigger, source or reporter
-        # TODO(jhesketh): Make this list dynamic or use entrypoints etc.
-        # Stevedore was not a good fit here due to the nature of triggers.
-        # Specifically we don't want to load a trigger per a pipeline as one
-        # trigger can listen to a stream (from gerrit, for example) and the
-        # scheduler decides which eventfilter to use. As such we want to load
-        # trigger+connection pairs uniquely.
-        drivers = {
-            'source': {
-                'gerrit': 'zuul.source.gerrit:GerritSource',
-            },
-            'trigger': {
-                'gerrit': 'zuul.trigger.gerrit:GerritTrigger',
-                'timer': 'zuul.trigger.timer:TimerTrigger',
-                'zuul': 'zuul.trigger.zuultrigger:ZuulTrigger',
-            },
-            'reporter': {
-                'gerrit': 'zuul.reporter.gerrit:GerritReporter',
-                'smtp': 'zuul.reporter.smtp:SMTPReporter',
-                'sql': 'zuul.reporter.sql:SQLReporter',
-            },
-        }
-
-        # TODO(jhesketh): Check the connection_name exists
-        if connection_name in self.connections.keys():
-            driver_name = self.connections[connection_name].driver_name
-            connection = self.connections[connection_name]
-        else:
-            # In some cases a driver may not be related to a connection. For
-            # example, the 'timer' or 'zuul' triggers.
-            driver_name = connection_name
-            connection = None
-        driver = drivers[dtype][driver_name].split(':')
-        driver_instance = getattr(
-            __import__(driver[0], fromlist=['']), driver[1])(
-                driver_config, self, connection
-        )
-
-        if connection:
-            connection.registerUse(dtype, driver_instance)
-
-        return driver_instance
-
-    def _getSourceDriver(self, connection_name):
-        return self._getDriver('source', connection_name)
-
-    def _getReporterDriver(self, connection_name, driver_config={}):
-        return self._getDriver('reporter', connection_name, driver_config)
-
-    def _getTriggerDriver(self, connection_name, driver_config={}):
-        return self._getDriver('trigger', connection_name, driver_config)
-
-    def _parseConfig(self, config_path, connections):
-        layout = model.Layout()
-        project_templates = {}
-
-        if config_path:
-            config_path = os.path.expanduser(config_path)
-            if not os.path.exists(config_path):
-                raise Exception("Unable to read layout config file at %s" %
-                                config_path)
-        with open(config_path) as config_file:
-            data = yaml.load(config_file)
-
-        validator = layoutvalidator.LayoutValidator()
-        validator.validate(data, connections)
-
-        config_env = {}
-        for include in data.get('includes', []):
-            if 'python-file' in include:
-                fn = include['python-file']
-                if not os.path.isabs(fn):
-                    base = os.path.dirname(os.path.realpath(config_path))
-                    fn = os.path.join(base, fn)
-                fn = os.path.expanduser(fn)
-                with open(fn) as _f:
-                    code = compile(_f.read(), fn, 'exec')
-                    six.exec_(code, config_env)
-
-        for conf_pipeline in data.get('pipelines', []):
-            pipeline = Pipeline(conf_pipeline['name'])
-            pipeline.description = conf_pipeline.get('description')
-            # TODO(jeblair): remove backwards compatibility:
-            pipeline.source = self._getSourceDriver(
-                conf_pipeline.get('source', 'gerrit'))
-            precedence = model.PRECEDENCE_MAP[conf_pipeline.get('precedence')]
-            pipeline.precedence = precedence
-            pipeline.failure_message = conf_pipeline.get('failure-message',
-                                                         "Build failed.")
-            pipeline.merge_failure_message = conf_pipeline.get(
-                'merge-failure-message', "Merge Failed.\n\nThis change or one "
-                "of its cross-repo dependencies was unable to be "
-                "automatically merged with the current state of its "
-                "repository. Please rebase the change and upload a new "
-                "patchset.")
-            pipeline.success_message = conf_pipeline.get('success-message',
-                                                         "Build succeeded.")
-            pipeline.footer_message = conf_pipeline.get('footer-message', "")
-            pipeline.dequeue_on_new_patchset = conf_pipeline.get(
-                'dequeue-on-new-patchset', True)
-            pipeline.ignore_dependencies = conf_pipeline.get(
-                'ignore-dependencies', False)
-
-            for conf_key, action in self._reporter_actions.items():
-                reporter_set = []
-                if conf_pipeline.get(conf_key):
-                    for reporter_name, params \
-                        in conf_pipeline.get(conf_key).items():
-                        reporter = self._getReporterDriver(reporter_name,
-                                                           params)
-                        reporter.setAction(conf_key)
-                        reporter_set.append(reporter)
-                setattr(pipeline, action, reporter_set)
-
-            # If merge-failure actions aren't explicit, use the failure actions
-            if not pipeline.merge_failure_actions:
-                pipeline.merge_failure_actions = pipeline.failure_actions
-
-            pipeline.disable_at = conf_pipeline.get(
-                'disable-after-consecutive-failures', None)
-
-            pipeline.window = conf_pipeline.get('window', 20)
-            pipeline.window_floor = conf_pipeline.get('window-floor', 3)
-            pipeline.window_increase_type = conf_pipeline.get(
-                'window-increase-type', 'linear')
-            pipeline.window_increase_factor = conf_pipeline.get(
-                'window-increase-factor', 1)
-            pipeline.window_decrease_type = conf_pipeline.get(
-                'window-decrease-type', 'exponential')
-            pipeline.window_decrease_factor = conf_pipeline.get(
-                'window-decrease-factor', 2)
-
-            manager = globals()[conf_pipeline['manager']](self, pipeline)
-            pipeline.setManager(manager)
-            layout.pipelines[conf_pipeline['name']] = pipeline
-
-            if 'require' in conf_pipeline or 'reject' in conf_pipeline:
-                require = conf_pipeline.get('require', {})
-                reject = conf_pipeline.get('reject', {})
-                f = ChangeishFilter(
-                    open=require.get('open'),
-                    current_patchset=require.get('current-patchset'),
-                    statuses=toList(require.get('status')),
-                    required_approvals=toList(require.get('approval')),
-                    reject_approvals=toList(reject.get('approval'))
-                )
-                manager.changeish_filters.append(f)
-
-            for trigger_name, trigger_config\
-                in conf_pipeline.get('trigger').items():
-                if trigger_name not in self.triggers.keys():
-                    self.triggers[trigger_name] = \
-                        self._getTriggerDriver(trigger_name, trigger_config)
-
-            for trigger_name, trigger in self.triggers.items():
-                if trigger_name in conf_pipeline['trigger']:
-                    manager.event_filters += trigger.getEventFilters(
-                        conf_pipeline['trigger'][trigger_name])
-
-        for project_template in data.get('project-templates', []):
-            # Make sure the template only contains valid pipelines
-            tpl = dict(
-                (pipe_name, project_template.get(pipe_name))
-                for pipe_name in layout.pipelines.keys()
-                if pipe_name in project_template
-            )
-            project_templates[project_template.get('name')] = tpl
-
-        for config_job in data.get('jobs', []):
-            job = layout.getJob(config_job['name'])
-            # Be careful to only set attributes explicitly present on
-            # this job, to avoid squashing attributes set by a meta-job.
-            m = config_job.get('queue-name', None)
-            if m:
-                job.queue_name = m
-            m = config_job.get('failure-message', None)
-            if m:
-                job.failure_message = m
-            m = config_job.get('success-message', None)
-            if m:
-                job.success_message = m
-            m = config_job.get('failure-pattern', None)
-            if m:
-                job.failure_pattern = m
-            m = config_job.get('success-pattern', None)
-            if m:
-                job.success_pattern = m
-            m = config_job.get('hold-following-changes', False)
-            if m:
-                job.hold_following_changes = True
-            job.attempts = config_job.get('attempts', 3)
-            m = config_job.get('voting', None)
-            if m is not None:
-                job.voting = m
-            m = config_job.get('mutex', None)
-            if m is not None:
-                job.mutex = m
-            tags = toList(config_job.get('tags'))
-            if tags:
-                # Tags are merged via a union rather than a
-                # destructive copy because they are intended to
-                # accumulate onto any previously applied tags from
-                # metajobs.
-                job.tags = job.tags.union(set(tags))
-            fname = config_job.get('parameter-function', None)
-            if fname:
-                func = config_env.get(fname, None)
-                if not func:
-                    raise Exception("Unable to find function %s" % fname)
-                job.parameter_function = func
-            branches = toList(config_job.get('branch'))
-            if branches:
-                job._branches = branches
-                job.branches = [re.compile(x) for x in branches]
-            files = toList(config_job.get('files'))
-            if files:
-                job._files = files
-                job.files = [re.compile(x) for x in files]
-            skip_if_matcher = self._parseSkipIf(config_job)
-            if skip_if_matcher:
-                job.skip_if_matcher = skip_if_matcher
-            swift = toList(config_job.get('swift'))
-            if swift:
-                for s in swift:
-                    job.swift[s['name']] = s
-
-        def add_jobs(job_tree, config_jobs):
-            for job in config_jobs:
-                if isinstance(job, list):
-                    for x in job:
-                        add_jobs(job_tree, x)
-                if isinstance(job, dict):
-                    for parent, children in job.items():
-                        parent_tree = job_tree.addJob(layout.getJob(parent))
-                        add_jobs(parent_tree, children)
-                if isinstance(job, str):
-                    job_tree.addJob(layout.getJob(job))
-
-        for config_project in data.get('projects', []):
-            project = Project(config_project['name'])
-            shortname = config_project['name'].split('/')[-1]
-
-            # This is reversed due to the prepend operation below, so
-            # the ultimate order is templates (in order) followed by
-            # statically defined jobs.
-            for requested_template in reversed(
-                config_project.get('template', [])):
-                # Fetch the template from 'project-templates'
-                tpl = project_templates.get(
-                    requested_template.get('name'))
-                # Expand it with the project context
-                requested_template['name'] = shortname
-                expanded = deep_format(tpl, requested_template)
-                # Finally merge the expansion with whatever has been
-                # already defined for this project.  Prepend our new
-                # jobs to existing ones (which may have been
-                # statically defined or defined by other templates).
-                for pipeline in layout.pipelines.values():
-                    if pipeline.name in expanded:
-                        config_project.update(
-                            {pipeline.name: expanded[pipeline.name] +
-                             config_project.get(pipeline.name, [])})
-
-            layout.projects[config_project['name']] = project
-            mode = config_project.get('merge-mode', 'merge-resolve')
-            project.merge_mode = model.MERGER_MAP[mode]
-            for pipeline in layout.pipelines.values():
-                if pipeline.name in config_project:
-                    job_tree = pipeline.addProject(project)
-                    config_jobs = config_project[pipeline.name]
-                    add_jobs(job_tree, config_jobs)
-
-        # All jobs should be defined at this point, get rid of
-        # metajobs so that getJob isn't doing anything weird.
-        layout.metajobs = []
-
-        for pipeline in layout.pipelines.values():
-            pipeline.manager._postConfig(layout)
-
-        return layout
+        self.connections.stop()
 
     def setLauncher(self, launcher):
         self.launcher = launcher
@@ -627,24 +308,17 @@
     def setMerger(self, merger):
         self.merger = merger
 
-    def getProject(self, name):
-        self.layout_lock.acquire()
-        p = None
-        try:
-            p = self.layout.projects.get(name)
-            if p is None:
-                self.log.info("Registering foreign project: %s" % name)
-                p = Project(name, foreign=True)
-                self.layout.projects[name] = p
-        finally:
-            self.layout_lock.release()
-        return p
+    def setNodepool(self, nodepool):
+        self.nodepool = nodepool
+
+    def setZooKeeper(self, zk):
+        self.zk = zk
 
     def addEvent(self, event):
         self.log.debug("Adding trigger event: %s" % event)
         try:
-            if statsd:
-                statsd.incr('gerrit.event.%s' % event.type)
+            if self.statsd:
+                self.statsd.incr('gerrit.event.%s' % event.type)
         except:
             self.log.exception("Exception reporting event stats")
         self.trigger_event_queue.put(event)
@@ -669,10 +343,10 @@
         # timing) is recorded before setting the result.
         build.result = result
         try:
-            if statsd and build.pipeline:
+            if self.statsd and build.pipeline:
                 jobname = build.job.name.replace('.', '_')
                 key = 'zuul.pipeline.%s.all_jobs' % build.pipeline.name
-                statsd.incr(key)
+                self.statsd.incr(key)
                 for label in build.node_labels:
                     # Jenkins includes the node name in its list of labels, so
                     # we filter it out here, since that is not statistically
@@ -682,18 +356,18 @@
                     dt = int((build.start_time - build.launch_time) * 1000)
                     key = 'zuul.pipeline.%s.label.%s.wait_time' % (
                         build.pipeline.name, label)
-                    statsd.timing(key, dt)
+                    self.statsd.timing(key, dt)
                 key = 'zuul.pipeline.%s.job.%s.%s' % (build.pipeline.name,
                                                       jobname, build.result)
                 if build.result in ['SUCCESS', 'FAILURE'] and build.start_time:
                     dt = int((build.end_time - build.start_time) * 1000)
-                    statsd.timing(key, dt)
-                statsd.incr(key)
+                    self.statsd.timing(key, dt)
+                self.statsd.incr(key)
 
                 key = 'zuul.pipeline.%s.job.%s.wait_time' % (
                     build.pipeline.name, jobname)
                 dt = int((build.start_time - build.launch_time) * 1000)
-                statsd.timing(key, dt)
+                self.statsd.timing(key, dt)
         except:
             self.log.exception("Exception reporting runtime stats")
         event = BuildCompletedEvent(build)
@@ -701,14 +375,28 @@
         self.wake_event.set()
         self.log.debug("Done adding complete event for build: %s" % build)
 
-    def onMergeCompleted(self, build_set, zuul_url, merged, updated, commit):
+    def onMergeCompleted(self, build_set, zuul_url, merged, updated,
+                         commit, files):
         self.log.debug("Adding merge complete event for build set: %s" %
                        build_set)
-        event = MergeCompletedEvent(build_set, zuul_url,
-                                    merged, updated, commit)
+        event = MergeCompletedEvent(build_set, zuul_url, merged,
+                                    updated, commit, files)
         self.result_event_queue.put(event)
         self.wake_event.set()
 
+    def onNodesProvisioned(self, req):
+        self.log.debug("Adding nodes provisioned event for build set: %s" %
+                       req.build_set)
+        event = NodesProvisionedEvent(req)
+        self.result_event_queue.put(event)
+        self.wake_event.set()
+
+    def reconfigureTenant(self, tenant):
+        self.log.debug("Prepare to reconfigure")
+        event = TenantReconfigureEvent(tenant)
+        self.management_event_queue.put(event)
+        self.wake_event.set()
+
     def reconfigure(self, config):
         self.log.debug("Prepare to reconfigure")
         event = ReconfigureEvent(config)
@@ -718,9 +406,10 @@
         event.wait()
         self.log.debug("Reconfiguration complete")
         self.last_reconfigured = int(time.time())
+        # TODOv3(jeblair): reconfigure time should be per-tenant
 
-    def promote(self, pipeline_name, change_ids):
-        event = PromoteEvent(pipeline_name, change_ids)
+    def promote(self, tenant_name, pipeline_name, change_ids):
+        event = PromoteEvent(tenant_name, pipeline_name, change_ids)
         self.management_event_queue.put(event)
         self.wake_event.set()
         self.log.debug("Waiting for promotion")
@@ -813,87 +502,112 @@
         self.config = event.config
         try:
             self.log.debug("Performing reconfiguration")
-            self._unloadDrivers()
-            layout = self._parseConfig(
-                self.config.get('zuul', 'layout_config'), self.connections)
-            for name, new_pipeline in layout.pipelines.items():
-                old_pipeline = self.layout.pipelines.get(name)
-                if not old_pipeline:
-                    if self.layout.pipelines:
-                        # Don't emit this warning on startup
-                        self.log.warning("No old pipeline matching %s found "
-                                         "when reconfiguring" % name)
-                    continue
-                self.log.debug("Re-enqueueing changes for pipeline %s" % name)
-                items_to_remove = []
-                builds_to_cancel = []
-                last_head = None
-                for shared_queue in old_pipeline.queues:
-                    for item in shared_queue.queue:
-                        if not item.item_ahead:
-                            last_head = item
-                        item.item_ahead = None
-                        item.items_behind = []
-                        item.pipeline = None
-                        item.queue = None
-                        project_name = item.change.project.name
-                        item.change.project = layout.projects.get(project_name)
-                        if not item.change.project:
-                            self.log.debug("Project %s not defined, "
-                                           "re-instantiating as foreign" %
-                                           project_name)
-                            project = Project(project_name, foreign=True)
-                            layout.projects[project_name] = project
-                            item.change.project = project
-                        item_jobs = new_pipeline.getJobs(item)
-                        for build in item.current_build_set.getBuilds():
-                            job = layout.jobs.get(build.job.name)
-                            if job and job in item_jobs:
-                                build.job = job
-                            else:
-                                item.removeBuild(build)
-                                builds_to_cancel.append(build)
-                        if not new_pipeline.manager.reEnqueueItem(item,
-                                                                  last_head):
-                            items_to_remove.append(item)
-                for item in items_to_remove:
-                    for build in item.current_build_set.getBuilds():
-                        builds_to_cancel.append(build)
-                for build in builds_to_cancel:
-                    self.log.warning(
-                        "Canceling build %s during reconfiguration" % (build,))
-                    try:
-                        self.launcher.cancel(build)
-                    except Exception:
-                        self.log.exception(
-                            "Exception while canceling build %s "
-                            "for change %s" % (build, item.change))
-                    finally:
-                        self.mutex.release(build.build_set.item, build.job)
-            self.layout = layout
-            self.maintainConnectionCache()
-            for trigger in self.triggers.values():
-                trigger.postConfig()
-            for pipeline in self.layout.pipelines.values():
-                pipeline.source.postConfig()
-                for action in self._reporter_actions.values():
-                    for reporter in pipeline.__getattribute__(action):
-                        reporter.postConfig()
-            if statsd:
-                try:
-                    for pipeline in self.layout.pipelines.values():
-                        items = len(pipeline.getAllItems())
-                        # stats.gauges.zuul.pipeline.NAME.current_changes
-                        key = 'zuul.pipeline.%s' % pipeline.name
-                        statsd.gauge(key + '.current_changes', items)
-                except Exception:
-                    self.log.exception("Exception reporting initial "
-                                       "pipeline stats:")
+            loader = configloader.ConfigLoader()
+            abide = loader.loadConfig(
+                self.config.get('zuul', 'tenant_config'),
+                self, self.merger, self.connections)
+            for tenant in abide.tenants.values():
+                self._reconfigureTenant(tenant)
+            self.abide = abide
         finally:
             self.layout_lock.release()
 
+    def _doTenantReconfigureEvent(self, event):
+        # This is called in the scheduler loop after another thread submits
+        # a request
+        self.layout_lock.acquire()
+        try:
+            self.log.debug("Performing tenant reconfiguration")
+            loader = configloader.ConfigLoader()
+            abide = loader.reloadTenant(
+                self.config.get('zuul', 'tenant_config'),
+                self, self.merger, self.connections,
+                self.abide, event.tenant)
+            tenant = abide.tenants[event.tenant.name]
+            self._reconfigureTenant(tenant)
+            self.abide = abide
+        finally:
+            self.layout_lock.release()
+
+    def _reenqueueTenant(self, old_tenant, tenant):
+        for name, new_pipeline in tenant.layout.pipelines.items():
+            old_pipeline = old_tenant.layout.pipelines.get(name)
+            if not old_pipeline:
+                self.log.warning("No old pipeline matching %s found "
+                                 "when reconfiguring" % name)
+                continue
+            self.log.debug("Re-enqueueing changes for pipeline %s" % name)
+            items_to_remove = []
+            builds_to_cancel = []
+            last_head = None
+            for shared_queue in old_pipeline.queues:
+                for item in shared_queue.queue:
+                    if not item.item_ahead:
+                        last_head = item
+                    item.item_ahead = None
+                    item.items_behind = []
+                    item.pipeline = None
+                    item.queue = None
+                    project_name = item.change.project.name
+                    item.change.project = new_pipeline.source.getProject(
+                        project_name)
+                    if new_pipeline.manager.reEnqueueItem(item,
+                                                          last_head):
+                        new_jobs = item.getJobs()
+                        for build in item.current_build_set.getBuilds():
+                            jobtree = item.job_tree.getJobTreeForJob(build.job)
+                            if jobtree and jobtree.job in new_jobs:
+                                build.job = jobtree.job
+                            else:
+                                item.removeBuild(build)
+                                builds_to_cancel.append(build)
+                    else:
+                        items_to_remove.append(item)
+            for item in items_to_remove:
+                for build in item.current_build_set.getBuilds():
+                    builds_to_cancel.append(build)
+            for build in builds_to_cancel:
+                self.log.warning(
+                    "Canceling build %s during reconfiguration" % (build,))
+                try:
+                    self.launcher.cancel(build)
+                except Exception:
+                    self.log.exception(
+                        "Exception while canceling build %s "
+                        "for change %s" % (build, item.change))
+                finally:
+                    self.mutex.release(build.build_set.item, build.job)
+
+    def _reconfigureTenant(self, tenant):
+        # This is called from _doReconfigureEvent while holding the
+        # layout lock
+        old_tenant = self.abide.tenants.get(tenant.name)
+        if old_tenant:
+            self._reenqueueTenant(old_tenant, tenant)
+        # TODOv3(jeblair): update for tenants
+        # self.maintainConnectionCache()
+        self.connections.reconfigureDrivers(tenant)
+        # TODOv3(jeblair): remove postconfig calls?
+        for pipeline in tenant.layout.pipelines.values():
+            pipeline.source.postConfig()
+            for trigger in pipeline.triggers:
+                trigger.postConfig(pipeline)
+            for reporter in pipeline.actions:
+                reporter.postConfig()
+        if self.statsd:
+            try:
+                for pipeline in tenant.layout.pipelines.values():
+                    items = len(pipeline.getAllItems())
+                    # stats.gauges.zuul.pipeline.NAME.current_changes
+                    key = 'zuul.pipeline.%s' % pipeline.name
+                    self.statsd.gauge(key + '.current_changes', items)
+            except Exception:
+                self.log.exception("Exception reporting initial "
+                                   "pipeline stats:")
+
     def _doPromoteEvent(self, event):
-        pipeline = self.layout.pipelines[event.pipeline_name]
+        tenant = self.abide.tenants.get(event.tenant_name)
+        pipeline = tenant.layout.pipelines[event.pipeline_name]
         change_ids = [c.split(',') for c in event.change_ids]
         items_to_enqueue = []
         change_queue = None
@@ -932,35 +646,35 @@
                 ignore_requirements=True)
 
     def _doEnqueueEvent(self, event):
-        project = self.layout.projects.get(event.project_name)
-        pipeline = self.layout.pipelines[event.forced_pipeline]
+        tenant = self.abide.tenants.get(event.tenant_name)
+        project = tenant.layout.project_configs.get(event.project_name)
+        pipeline = tenant.layout.pipelines[event.forced_pipeline]
         change = pipeline.source.getChange(event, project)
         self.log.debug("Event %s for change %s was directly assigned "
                        "to pipeline %s" % (event, change, self))
-        self.log.info("Adding %s, %s to %s" %
-                      (project, change, pipeline))
         pipeline.manager.addChange(change, ignore_requirements=True)
 
     def _areAllBuildsComplete(self):
         self.log.debug("Checking if all builds are complete")
-        waiting = False
         if self.merger.areMergesOutstanding():
-            waiting = True
-        for pipeline in self.layout.pipelines.values():
-            for item in pipeline.getAllItems():
-                for build in item.current_build_set.getBuilds():
-                    if build.result is None:
-                        self.log.debug("%s waiting on %s" %
-                                       (pipeline.manager, build))
-                        waiting = True
+            self.log.debug("Waiting on merger")
+            return False
+        waiting = False
+        for tenant in self.abide.tenants.values():
+            for pipeline in tenant.layout.pipelines.values():
+                for item in pipeline.getAllItems():
+                    for build in item.current_build_set.getBuilds():
+                        if build.result is None:
+                            self.log.debug("%s waiting on %s" %
+                                           (pipeline.manager, build))
+                            waiting = True
         if not waiting:
             self.log.debug("All builds are complete")
             return True
-        self.log.debug("All builds are not complete")
         return False
 
     def run(self):
-        if statsd:
+        if self.statsd:
             self.log.debug("Statsd enabled")
         else:
             self.log.debug("Statsd disabled because python statsd "
@@ -979,7 +693,7 @@
                     self.process_management_queue()
 
                 # Give result events priority -- they let us stop builds,
-                # whereas trigger evensts cause us to launch builds.
+                # whereas trigger events cause us to launch builds.
                 while not self.result_event_queue.empty():
                     self.process_result_queue()
 
@@ -990,9 +704,10 @@
                 if self._pause and self._areAllBuildsComplete():
                     self._doPauseEvent()
 
-                for pipeline in self.layout.pipelines.values():
-                    while pipeline.manager.processQueue():
-                        pass
+                for tenant in self.abide.tenants.values():
+                    for pipeline in tenant.layout.pipelines.values():
+                        while pipeline.manager.processQueue():
+                            pass
 
             except Exception:
                 self.log.exception("Exception in run handler:")
@@ -1002,12 +717,16 @@
                 self.run_handler_lock.release()
 
     def maintainConnectionCache(self):
+        # TODOv3(jeblair): update for tenants
         relevant = set()
-        for pipeline in self.layout.pipelines.values():
-            self.log.debug("Gather relevant cache items for: %s" % pipeline)
-            for item in pipeline.getAllItems():
-                relevant.add(item.change)
-                relevant.update(item.change.getRelatedChanges())
+        for tenant in self.abide.tenants.values():
+            for pipeline in tenant.layout.pipelines.values():
+                self.log.debug("Gather relevant cache items for: %s" %
+                               pipeline)
+
+                for item in pipeline.getAllItems():
+                    relevant.add(item.change)
+                    relevant.update(item.change.getRelatedChanges())
         for connection in self.connections.values():
             connection.maintainCache(relevant)
             self.log.debug(
@@ -1019,31 +738,37 @@
         event = self.trigger_event_queue.get()
         self.log.debug("Processing trigger event %s" % event)
         try:
-            project = self.layout.projects.get(event.project_name)
-
-            for pipeline in self.layout.pipelines.values():
-                # Get the change even if the project is unknown to us for the
-                # use of updating the cache if there is another change
-                # depending on this foreign one.
-                try:
-                    change = pipeline.source.getChange(event, project)
-                except exceptions.ChangeNotFound as e:
-                    self.log.debug("Unable to get change %s from source %s. "
-                                   "(most likely looking for a change from "
-                                   "another connection trigger)",
-                                   e.change, pipeline.source)
-                    continue
-                if not project or project.foreign:
-                    self.log.debug("Project %s not found" % event.project_name)
-                    continue
-                if event.type == 'patchset-created':
-                    pipeline.manager.removeOldVersionsOfChange(change)
-                elif event.type == 'change-abandoned':
-                    pipeline.manager.removeAbandonedChange(change)
-                if pipeline.manager.eventMatches(event, change):
-                    self.log.info("Adding %s, %s to %s" %
-                                  (project, change, pipeline))
-                    pipeline.manager.addChange(change)
+            for tenant in self.abide.tenants.values():
+                reconfigured_tenant = False
+                for pipeline in tenant.layout.pipelines.values():
+                    # Get the change even if the project is unknown to
+                    # us for the use of updating the cache if there is
+                    # another change depending on this foreign one.
+                    try:
+                        change = pipeline.source.getChange(event)
+                    except exceptions.ChangeNotFound as e:
+                        self.log.debug("Unable to get change %s from "
+                                       "source %s (most likely looking "
+                                       "for a change from another "
+                                       "connection trigger)",
+                                       e.change, pipeline.source)
+                        continue
+                    if (event.type == 'change-merged' and
+                        hasattr(change, 'files') and
+                        not reconfigured_tenant and
+                        change.updatesConfig()):
+                        # The change that just landed updates the config.
+                        # Clear out cached data for this project and
+                        # perform a reconfiguration.
+                        change.project.unparsed_config = None
+                        self.reconfigureTenant(tenant)
+                        reconfigured_tenant = True
+                    if event.type == 'patchset-created':
+                        pipeline.manager.removeOldVersionsOfChange(change)
+                    elif event.type == 'change-abandoned':
+                        pipeline.manager.removeAbandonedChange(change)
+                    if pipeline.manager.eventMatches(event, change):
+                        pipeline.manager.addChange(change)
         finally:
             self.trigger_event_queue.task_done()
 
@@ -1054,6 +779,8 @@
         try:
             if isinstance(event, ReconfigureEvent):
                 self._doReconfigureEvent(event)
+            elif isinstance(event, TenantReconfigureEvent):
+                self._doTenantReconfigureEvent(event)
             elif isinstance(event, PromoteEvent):
                 self._doPromoteEvent(event)
             elif isinstance(event, EnqueueEvent):
@@ -1076,6 +803,8 @@
                 self._doBuildCompletedEvent(event)
             elif isinstance(event, MergeCompletedEvent):
                 self._doMergeCompletedEvent(event)
+            elif isinstance(event, NodesProvisionedEvent):
+                self._doNodesProvisionedEvent(event)
             else:
                 self.log.error("Unable to handle event %s" % event)
         finally:
@@ -1101,9 +830,19 @@
 
     def _doBuildCompletedEvent(self, event):
         build = event.build
+
+        # Regardless of any other conditions which might cause us not
+        # to pass this on to the pipeline manager, make sure we return
+        # the nodes to nodepool.
+        try:
+            nodeset = build.build_set.getJobNodeSet(build.job.name)
+            self.nodepool.returnNodeSet(nodeset)
+        except Exception:
+            self.log.exception("Unable to return nodeset %s" % (nodeset,))
+
         if build.build_set is not build.build_set.item.current_build_set:
-            self.log.warning("Build %s is not in the current build set" %
-                             (build,))
+            self.log.debug("Build %s is not in the current build set" %
+                           (build,))
             return
         pipeline = build.build_set.item.pipeline
         if not pipeline:
@@ -1131,7 +870,28 @@
             return
         pipeline.manager.onMergeCompleted(event)
 
-    def formatStatusJSON(self):
+    def _doNodesProvisionedEvent(self, event):
+        request = event.request
+        build_set = request.build_set
+
+        self.nodepool.acceptNodes(request)
+
+        if build_set is not build_set.item.current_build_set:
+            self.log.warning("Build set %s is not current" % (build_set,))
+            if request.fulfilled:
+                self.nodepool.returnNodeSet(request.nodeset)
+            return
+        pipeline = build_set.item.pipeline
+        if not pipeline:
+            self.log.warning("Build set %s is not associated with a pipeline" %
+                             (build_set,))
+            if request.fulfilled:
+                self.nodepool.returnNodeSet(request.nodeset)
+            return
+        pipeline.manager.onNodesProvisioned(event)
+
+    def formatStatusJSON(self, tenant_name):
+        # TODOv3(jeblair): use tenants
         if self.config.has_option('zuul', 'url_pattern'):
             url_pattern = self.config.get('zuul', 'url_pattern')
         else:
@@ -1161,1036 +921,7 @@
 
         pipelines = []
         data['pipelines'] = pipelines
-        for pipeline in self.layout.pipelines.values():
+        tenant = self.abide.tenants.get(tenant_name)
+        for pipeline in tenant.layout.pipelines.values():
             pipelines.append(pipeline.formatStatusJSON(url_pattern))
         return json.dumps(data)
-
-
-class BasePipelineManager(object):
-    log = logging.getLogger("zuul.BasePipelineManager")
-
-    def __init__(self, sched, pipeline):
-        self.sched = sched
-        self.pipeline = pipeline
-        self.event_filters = []
-        self.changeish_filters = []
-
-    def __str__(self):
-        return "<%s %s>" % (self.__class__.__name__, self.pipeline.name)
-
-    def _postConfig(self, layout):
-        self.log.info("Configured Pipeline Manager %s" % self.pipeline.name)
-        self.log.info("  Source: %s" % self.pipeline.source)
-        self.log.info("  Requirements:")
-        for f in self.changeish_filters:
-            self.log.info("    %s" % f)
-        self.log.info("  Events:")
-        for e in self.event_filters:
-            self.log.info("    %s" % e)
-        self.log.info("  Projects:")
-
-        def log_jobs(tree, indent=0):
-            istr = '    ' + ' ' * indent
-            if tree.job:
-                efilters = ''
-                for b in tree.job._branches:
-                    efilters += str(b)
-                for f in tree.job._files:
-                    efilters += str(f)
-                if tree.job.skip_if_matcher:
-                    efilters += str(tree.job.skip_if_matcher)
-                if efilters:
-                    efilters = ' ' + efilters
-                tags = []
-                if tree.job.hold_following_changes:
-                    tags.append('[hold]')
-                if not tree.job.voting:
-                    tags.append('[nonvoting]')
-                if tree.job.mutex:
-                    tags.append('[mutex: %s]' % tree.job.mutex)
-                tags = ' '.join(tags)
-                self.log.info("%s%s%s %s" % (istr, repr(tree.job),
-                                             efilters, tags))
-            for x in tree.job_trees:
-                log_jobs(x, indent + 2)
-
-        for p in layout.projects.values():
-            tree = self.pipeline.getJobTree(p)
-            if tree:
-                self.log.info("    %s" % p)
-                log_jobs(tree)
-        self.log.info("  On start:")
-        self.log.info("    %s" % self.pipeline.start_actions)
-        self.log.info("  On success:")
-        self.log.info("    %s" % self.pipeline.success_actions)
-        self.log.info("  On failure:")
-        self.log.info("    %s" % self.pipeline.failure_actions)
-        self.log.info("  On merge-failure:")
-        self.log.info("    %s" % self.pipeline.merge_failure_actions)
-        self.log.info("  When disabled:")
-        self.log.info("    %s" % self.pipeline.disabled_actions)
-
-    def getSubmitAllowNeeds(self):
-        # Get a list of code review labels that are allowed to be
-        # "needed" in the submit records for a change, with respect
-        # to this queue.  In other words, the list of review labels
-        # this queue itself is likely to set before submitting.
-        allow_needs = set()
-        for action_reporter in self.pipeline.success_actions:
-            allow_needs.update(action_reporter.getSubmitAllowNeeds())
-        return allow_needs
-
-    def eventMatches(self, event, change):
-        if event.forced_pipeline:
-            if event.forced_pipeline == self.pipeline.name:
-                self.log.debug("Event %s for change %s was directly assigned "
-                               "to pipeline %s" % (event, change, self))
-                return True
-            else:
-                return False
-        for ef in self.event_filters:
-            if ef.matches(event, change):
-                self.log.debug("Event %s for change %s matched %s "
-                               "in pipeline %s" % (event, change, ef, self))
-                return True
-        return False
-
-    def isChangeAlreadyInPipeline(self, change):
-        # Checks live items in the pipeline
-        for item in self.pipeline.getAllItems():
-            if item.live and change.equals(item.change):
-                return True
-        return False
-
-    def isChangeAlreadyInQueue(self, change, change_queue):
-        # Checks any item in the specified change queue
-        for item in change_queue.queue:
-            if change.equals(item.change):
-                return True
-        return False
-
-    def reportStart(self, item):
-        if not self.pipeline._disabled:
-            try:
-                self.log.info("Reporting start, action %s item %s" %
-                              (self.pipeline.start_actions, item))
-                ret = self.sendReport(self.pipeline.start_actions,
-                                      self.pipeline.source, item)
-                if ret:
-                    self.log.error("Reporting item start %s received: %s" %
-                                   (item, ret))
-            except:
-                self.log.exception("Exception while reporting start:")
-
-    def sendReport(self, action_reporters, source, item,
-                   message=None):
-        """Sends the built message off to configured reporters.
-
-        Takes the action_reporters, item, message and extra options and
-        sends them to the pluggable reporters.
-        """
-        report_errors = []
-        if len(action_reporters) > 0:
-            for reporter in action_reporters:
-                ret = reporter.report(source, self.pipeline, item)
-                if ret:
-                    report_errors.append(ret)
-            if len(report_errors) == 0:
-                return
-        return report_errors
-
-    def isChangeReadyToBeEnqueued(self, change):
-        return True
-
-    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
-                            change_queue):
-        return True
-
-    def enqueueChangesBehind(self, change, quiet, ignore_requirements,
-                             change_queue):
-        return True
-
-    def checkForChangesNeededBy(self, change, change_queue):
-        return True
-
-    def getFailingDependentItems(self, item):
-        return None
-
-    def getDependentItems(self, item):
-        orig_item = item
-        items = []
-        while item.item_ahead:
-            items.append(item.item_ahead)
-            item = item.item_ahead
-        self.log.info("Change %s depends on changes %s" %
-                      (orig_item.change,
-                       [x.change for x in items]))
-        return items
-
-    def getItemForChange(self, change):
-        for item in self.pipeline.getAllItems():
-            if item.change.equals(change):
-                return item
-        return None
-
-    def findOldVersionOfChangeAlreadyInQueue(self, change):
-        for item in self.pipeline.getAllItems():
-            if not item.live:
-                continue
-            if change.isUpdateOf(item.change):
-                return item
-        return None
-
-    def removeOldVersionsOfChange(self, change):
-        if not self.pipeline.dequeue_on_new_patchset:
-            return
-        old_item = self.findOldVersionOfChangeAlreadyInQueue(change)
-        if old_item:
-            self.log.debug("Change %s is a new version of %s, removing %s" %
-                           (change, old_item.change, old_item))
-            self.removeItem(old_item)
-
-    def removeAbandonedChange(self, change):
-        self.log.debug("Change %s abandoned, removing." % change)
-        for item in self.pipeline.getAllItems():
-            if not item.live:
-                continue
-            if item.change.equals(change):
-                self.removeItem(item)
-
-    def reEnqueueItem(self, item, last_head):
-        with self.getChangeQueue(item.change, last_head.queue) as change_queue:
-            if change_queue:
-                self.log.debug("Re-enqueing change %s in queue %s" %
-                               (item.change, change_queue))
-                change_queue.enqueueItem(item)
-
-                # Re-set build results in case any new jobs have been
-                # added to the tree.
-                for build in item.current_build_set.getBuilds():
-                    if build.result:
-                        self.pipeline.setResult(item, build)
-                # Similarly, reset the item state.
-                if item.current_build_set.unable_to_merge:
-                    self.pipeline.setUnableToMerge(item)
-                if item.dequeued_needing_change:
-                    self.pipeline.setDequeuedNeedingChange(item)
-
-                self.reportStats(item)
-                return True
-            else:
-                self.log.error("Unable to find change queue for project %s" %
-                               item.change.project)
-                return False
-
-    def addChange(self, change, quiet=False, enqueue_time=None,
-                  ignore_requirements=False, live=True,
-                  change_queue=None):
-        self.log.debug("Considering adding change %s" % change)
-
-        # If we are adding a live change, check if it's a live item
-        # anywhere in the pipeline.  Otherwise, we will perform the
-        # duplicate check below on the specific change_queue.
-        if live and self.isChangeAlreadyInPipeline(change):
-            self.log.debug("Change %s is already in pipeline, "
-                           "ignoring" % change)
-            return True
-
-        if not self.isChangeReadyToBeEnqueued(change):
-            self.log.debug("Change %s is not ready to be enqueued, ignoring" %
-                           change)
-            return False
-
-        if not ignore_requirements:
-            for f in self.changeish_filters:
-                if not f.matches(change):
-                    self.log.debug("Change %s does not match pipeline "
-                                   "requirement %s" % (change, f))
-                    return False
-
-        with self.getChangeQueue(change, change_queue) as change_queue:
-            if not change_queue:
-                self.log.debug("Unable to find change queue for "
-                               "change %s in project %s" %
-                               (change, change.project))
-                return False
-
-            if not self.enqueueChangesAhead(change, quiet, ignore_requirements,
-                                            change_queue):
-                self.log.debug("Failed to enqueue changes "
-                               "ahead of %s" % change)
-                return False
-
-            if self.isChangeAlreadyInQueue(change, change_queue):
-                self.log.debug("Change %s is already in queue, "
-                               "ignoring" % change)
-                return True
-
-            self.log.debug("Adding change %s to queue %s" %
-                           (change, change_queue))
-            item = change_queue.enqueueChange(change)
-            if enqueue_time:
-                item.enqueue_time = enqueue_time
-            item.live = live
-            self.reportStats(item)
-            if not quiet:
-                if len(self.pipeline.start_actions) > 0:
-                    self.reportStart(item)
-            self.enqueueChangesBehind(change, quiet, ignore_requirements,
-                                      change_queue)
-            for trigger in self.sched.triggers.values():
-                trigger.onChangeEnqueued(item.change, self.pipeline)
-            return True
-
-    def dequeueItem(self, item):
-        self.log.debug("Removing change %s from queue" % item.change)
-        item.queue.dequeueItem(item)
-
-    def removeItem(self, item):
-        # Remove an item from the queue, probably because it has been
-        # superseded by another change.
-        self.log.debug("Canceling builds behind change: %s "
-                       "because it is being removed." % item.change)
-        self.cancelJobs(item)
-        self.dequeueItem(item)
-        self.reportStats(item)
-
-    def _makeMergerItem(self, item):
-        # Create a dictionary with all info about the item needed by
-        # the merger.
-        number = None
-        patchset = None
-        oldrev = None
-        newrev = None
-        if hasattr(item.change, 'number'):
-            number = item.change.number
-            patchset = item.change.patchset
-        elif hasattr(item.change, 'newrev'):
-            oldrev = item.change.oldrev
-            newrev = item.change.newrev
-        connection_name = self.pipeline.source.connection.connection_name
-        return dict(project=item.change.project.name,
-                    url=self.pipeline.source.getGitUrl(
-                        item.change.project),
-                    connection_name=connection_name,
-                    merge_mode=item.change.project.merge_mode,
-                    refspec=item.change.refspec,
-                    branch=item.change.branch,
-                    ref=item.current_build_set.ref,
-                    number=number,
-                    patchset=patchset,
-                    oldrev=oldrev,
-                    newrev=newrev,
-                    )
-
-    def prepareRef(self, item):
-        # Returns True if the ref is ready, false otherwise
-        build_set = item.current_build_set
-        if build_set.merge_state == build_set.COMPLETE:
-            return True
-        if build_set.merge_state == build_set.PENDING:
-            return False
-        ref = build_set.ref
-        if hasattr(item.change, 'refspec') and not ref:
-            self.log.debug("Preparing ref for: %s" % item.change)
-            item.current_build_set.setConfiguration()
-            dependent_items = self.getDependentItems(item)
-            dependent_items.reverse()
-            all_items = dependent_items + [item]
-            merger_items = map(self._makeMergerItem, all_items)
-            self.sched.merger.mergeChanges(merger_items,
-                                           item.current_build_set,
-                                           self.pipeline.precedence)
-        else:
-            self.log.debug("Preparing update repo for: %s" % item.change)
-            url = self.pipeline.source.getGitUrl(item.change.project)
-            connection_name = self.pipeline.source.connection.connection_name
-            self.sched.merger.updateRepo(item.change.project.name,
-                                         connection_name, url, build_set,
-                                         self.pipeline.precedence)
-        # merge:merge has been emitted properly:
-        build_set.merge_state = build_set.PENDING
-        return False
-
-    def _launchJobs(self, item, jobs):
-        self.log.debug("Launching jobs for change %s" % item.change)
-        dependent_items = self.getDependentItems(item)
-        for job in jobs:
-            self.log.debug("Found job %s for change %s" % (job, item.change))
-            try:
-                build = self.sched.launcher.launch(job, item,
-                                                   self.pipeline,
-                                                   dependent_items)
-                self.log.debug("Adding build %s of job %s to item %s" %
-                               (build, job, item))
-                item.addBuild(build)
-            except:
-                self.log.exception("Exception while launching job %s "
-                                   "for change %s:" % (job, item.change))
-
-    def launchJobs(self, item):
-        jobs = self.pipeline.findJobsToRun(item, self.sched.mutex)
-        if jobs:
-            self._launchJobs(item, jobs)
-
-    def cancelJobs(self, item, prime=True):
-        self.log.debug("Cancel jobs for change %s" % item.change)
-        canceled = False
-        old_build_set = item.current_build_set
-        if prime and item.current_build_set.ref:
-            item.resetAllBuilds()
-        for build in old_build_set.getBuilds():
-            try:
-                self.sched.launcher.cancel(build)
-            except:
-                self.log.exception("Exception while canceling build %s "
-                                   "for change %s" % (build, item.change))
-            finally:
-                self.sched.mutex.release(build.build_set.item, build.job)
-            build.result = 'CANCELED'
-            canceled = True
-        self.updateBuildDescriptions(old_build_set)
-        for item_behind in item.items_behind:
-            self.log.debug("Canceling jobs for change %s, behind change %s" %
-                           (item_behind.change, item.change))
-            if self.cancelJobs(item_behind, prime=prime):
-                canceled = True
-        return canceled
-
-    def _processOneItem(self, item, nnfi):
-        changed = False
-        item_ahead = item.item_ahead
-        if item_ahead and (not item_ahead.live):
-            item_ahead = None
-        change_queue = item.queue
-        failing_reasons = []  # Reasons this item is failing
-
-        if self.checkForChangesNeededBy(item.change, change_queue) is not True:
-            # It's not okay to enqueue this change, we should remove it.
-            self.log.info("Dequeuing change %s because "
-                          "it can no longer merge" % item.change)
-            self.cancelJobs(item)
-            self.dequeueItem(item)
-            self.pipeline.setDequeuedNeedingChange(item)
-            if item.live:
-                try:
-                    self.reportItem(item)
-                except exceptions.MergeFailure:
-                    pass
-            return (True, nnfi)
-        dep_items = self.getFailingDependentItems(item)
-        actionable = change_queue.isActionable(item)
-        item.active = actionable
-        ready = False
-        if dep_items:
-            failing_reasons.append('a needed change is failing')
-            self.cancelJobs(item, prime=False)
-        else:
-            item_ahead_merged = False
-            if (item_ahead and item_ahead.change.is_merged):
-                item_ahead_merged = True
-            if (item_ahead != nnfi and not item_ahead_merged):
-                # Our current base is different than what we expected,
-                # and it's not because our current base merged.  Something
-                # ahead must have failed.
-                self.log.info("Resetting builds for change %s because the "
-                              "item ahead, %s, is not the nearest non-failing "
-                              "item, %s" % (item.change, item_ahead, nnfi))
-                change_queue.moveItem(item, nnfi)
-                changed = True
-                self.cancelJobs(item)
-            if actionable:
-                ready = self.prepareRef(item)
-                if item.current_build_set.unable_to_merge:
-                    failing_reasons.append("it has a merge conflict")
-                    ready = False
-        if actionable and ready and self.launchJobs(item):
-            changed = True
-        if self.pipeline.didAnyJobFail(item):
-            failing_reasons.append("at least one job failed")
-        if (not item.live) and (not item.items_behind):
-            failing_reasons.append("is a non-live item with no items behind")
-            self.dequeueItem(item)
-            changed = True
-        if ((not item_ahead) and self.pipeline.areAllJobsComplete(item)
-            and item.live):
-            try:
-                self.reportItem(item)
-            except exceptions.MergeFailure:
-                failing_reasons.append("it did not merge")
-                for item_behind in item.items_behind:
-                    self.log.info("Resetting builds for change %s because the "
-                                  "item ahead, %s, failed to merge" %
-                                  (item_behind.change, item))
-                    self.cancelJobs(item_behind)
-            self.dequeueItem(item)
-            changed = True
-        elif not failing_reasons and item.live:
-            nnfi = item
-        item.current_build_set.failing_reasons = failing_reasons
-        if failing_reasons:
-            self.log.debug("%s is a failing item because %s" %
-                           (item, failing_reasons))
-        return (changed, nnfi)
-
-    def processQueue(self):
-        # Do whatever needs to be done for each change in the queue
-        self.log.debug("Starting queue processor: %s" % self.pipeline.name)
-        changed = False
-        for queue in self.pipeline.queues:
-            queue_changed = False
-            nnfi = None  # Nearest non-failing item
-            for item in queue.queue[:]:
-                item_changed, nnfi = self._processOneItem(
-                    item, nnfi)
-                if item_changed:
-                    queue_changed = True
-                self.reportStats(item)
-            if queue_changed:
-                changed = True
-                status = ''
-                for item in queue.queue:
-                    status += item.formatStatus()
-                if status:
-                    self.log.debug("Queue %s status is now:\n %s" %
-                                   (queue.name, status))
-        self.log.debug("Finished queue processor: %s (changed: %s)" %
-                       (self.pipeline.name, changed))
-        return changed
-
-    def updateBuildDescriptions(self, build_set):
-        for build in build_set.getBuilds():
-            try:
-                desc = self.formatDescription(build)
-                self.sched.launcher.setBuildDescription(build, desc)
-            except:
-                # Log the failure and let loop continue
-                self.log.error("Failed to update description for build %s" %
-                               (build))
-
-        if build_set.previous_build_set:
-            for build in build_set.previous_build_set.getBuilds():
-                try:
-                    desc = self.formatDescription(build)
-                    self.sched.launcher.setBuildDescription(build, desc)
-                except:
-                    # Log the failure and let loop continue
-                    self.log.error("Failed to update description for "
-                                   "build %s in previous build set" % (build))
-
-    def onBuildStarted(self, build):
-        self.log.debug("Build %s started" % build)
-        return True
-
-    def onBuildCompleted(self, build):
-        self.log.debug("Build %s completed" % build)
-        item = build.build_set.item
-
-        self.pipeline.setResult(item, build)
-        self.sched.mutex.release(item, build.job)
-        self.log.debug("Item %s status is now:\n %s" %
-                       (item, item.formatStatus()))
-        return True
-
-    def onMergeCompleted(self, event):
-        build_set = event.build_set
-        item = build_set.item
-        build_set.merge_state = build_set.COMPLETE
-        build_set.zuul_url = event.zuul_url
-        if event.merged:
-            build_set.commit = event.commit
-        elif event.updated:
-            if not isinstance(item.change, NullChange):
-                build_set.commit = item.change.newrev
-        if not build_set.commit and not isinstance(item.change, NullChange):
-            self.log.info("Unable to merge change %s" % item.change)
-            self.pipeline.setUnableToMerge(item)
-
-    def reportItem(self, item):
-        if not item.reported:
-            # _reportItem() returns True if it failed to report.
-            item.reported = not self._reportItem(item)
-        if self.changes_merge:
-            succeeded = self.pipeline.didAllJobsSucceed(item)
-            merged = item.reported
-            if merged:
-                merged = self.pipeline.source.isMerged(item.change,
-                                                       item.change.branch)
-            self.log.info("Reported change %s status: all-succeeded: %s, "
-                          "merged: %s" % (item.change, succeeded, merged))
-            change_queue = item.queue
-            if not (succeeded and merged):
-                self.log.debug("Reported change %s failed tests or failed "
-                               "to merge" % (item.change))
-                change_queue.decreaseWindowSize()
-                self.log.debug("%s window size decreased to %s" %
-                               (change_queue, change_queue.window))
-                raise exceptions.MergeFailure(
-                    "Change %s failed to merge" % item.change)
-            else:
-                change_queue.increaseWindowSize()
-                self.log.debug("%s window size increased to %s" %
-                               (change_queue, change_queue.window))
-
-                for trigger in self.sched.triggers.values():
-                    trigger.onChangeMerged(item.change, self.pipeline.source)
-
-    def _reportItem(self, item):
-        self.log.debug("Reporting change %s" % item.change)
-        ret = True  # Means error as returned by trigger.report
-        if not self.pipeline.getJobs(item):
-            # We don't send empty reports with +1,
-            # and the same for -1's (merge failures or transient errors)
-            # as they cannot be followed by +1's
-            self.log.debug("No jobs for change %s" % item.change)
-            actions = []
-        elif self.pipeline.didAllJobsSucceed(item):
-            self.log.debug("success %s" % (self.pipeline.success_actions))
-            actions = self.pipeline.success_actions
-            item.setReportedResult('SUCCESS')
-            self.pipeline._consecutive_failures = 0
-        elif not self.pipeline.didMergerSucceed(item):
-            actions = self.pipeline.merge_failure_actions
-            item.setReportedResult('MERGER_FAILURE')
-        else:
-            actions = self.pipeline.failure_actions
-            item.setReportedResult('FAILURE')
-            self.pipeline._consecutive_failures += 1
-        if self.pipeline._disabled:
-            actions = self.pipeline.disabled_actions
-        # Check here if we should disable so that we only use the disabled
-        # reporters /after/ the last disable_at failure is still reported as
-        # normal.
-        if (self.pipeline.disable_at and not self.pipeline._disabled and
-            self.pipeline._consecutive_failures >= self.pipeline.disable_at):
-            self.pipeline._disabled = True
-        if actions:
-            try:
-                self.log.info("Reporting item %s, actions: %s" %
-                              (item, actions))
-                ret = self.sendReport(actions, self.pipeline.source, item)
-                if ret:
-                    self.log.error("Reporting item %s received: %s" %
-                                   (item, ret))
-            except:
-                self.log.exception("Exception while reporting:")
-                item.setReportedResult('ERROR')
-        self.updateBuildDescriptions(item.current_build_set)
-        return ret
-
-    def formatDescription(self, build):
-        concurrent_changes = ''
-        concurrent_builds = ''
-        other_builds = ''
-
-        for change in build.build_set.other_changes:
-            concurrent_changes += '<li><a href="{change.url}">\
-              {change.number},{change.patchset}</a></li>'.format(
-                change=change)
-
-        change = build.build_set.item.change
-
-        for build in build.build_set.getBuilds():
-            if build.url:
-                concurrent_builds += """\
-<li>
-  <a href="{build.url}">
-  {build.job.name} #{build.number}</a>: {build.result}
-</li>
-""".format(build=build)
-            else:
-                concurrent_builds += """\
-<li>
-  {build.job.name}: {build.result}
-</li>""".format(build=build)
-
-        if build.build_set.previous_build_set:
-            other_build = build.build_set.previous_build_set.getBuild(
-                build.job.name)
-            if other_build:
-                other_builds += """\
-<li>
-  Preceded by: <a href="{build.url}">
-  {build.job.name} #{build.number}</a>
-</li>
-""".format(build=other_build)
-
-        if build.build_set.next_build_set:
-            other_build = build.build_set.next_build_set.getBuild(
-                build.job.name)
-            if other_build:
-                other_builds += """\
-<li>
-  Succeeded by: <a href="{build.url}">
-  {build.job.name} #{build.number}</a>
-</li>
-""".format(build=other_build)
-
-        result = build.build_set.result
-
-        if hasattr(change, 'number'):
-            ret = """\
-<p>
-  Triggered by change:
-    <a href="{change.url}">{change.number},{change.patchset}</a><br/>
-  Branch: <b>{change.branch}</b><br/>
-  Pipeline: <b>{self.pipeline.name}</b>
-</p>"""
-        elif hasattr(change, 'ref'):
-            ret = """\
-<p>
-  Triggered by reference:
-    {change.ref}</a><br/>
-  Old revision: <b>{change.oldrev}</b><br/>
-  New revision: <b>{change.newrev}</b><br/>
-  Pipeline: <b>{self.pipeline.name}</b>
-</p>"""
-        else:
-            ret = ""
-
-        if concurrent_changes:
-            ret += """\
-<p>
-  Other changes tested concurrently with this change:
-  <ul>{concurrent_changes}</ul>
-</p>
-"""
-        if concurrent_builds:
-            ret += """\
-<p>
-  All builds for this change set:
-  <ul>{concurrent_builds}</ul>
-</p>
-"""
-
-        if other_builds:
-            ret += """\
-<p>
-  Other build sets for this change:
-  <ul>{other_builds}</ul>
-</p>
-"""
-        if result:
-            ret += """\
-<p>
-  Reported result: <b>{result}</b>
-</p>
-"""
-
-        ret = ret.format(**locals())
-        return ret
-
-    def reportStats(self, item):
-        if not statsd:
-            return
-        try:
-            # Update the gauge on enqueue and dequeue, but timers only
-            # when dequeing.
-            if item.dequeue_time:
-                dt = int((item.dequeue_time - item.enqueue_time) * 1000)
-            else:
-                dt = None
-            items = len(self.pipeline.getAllItems())
-
-            # stats.timers.zuul.pipeline.NAME.resident_time
-            # stats_counts.zuul.pipeline.NAME.total_changes
-            # stats.gauges.zuul.pipeline.NAME.current_changes
-            key = 'zuul.pipeline.%s' % self.pipeline.name
-            statsd.gauge(key + '.current_changes', items)
-            if dt:
-                statsd.timing(key + '.resident_time', dt)
-                statsd.incr(key + '.total_changes')
-
-            # stats.timers.zuul.pipeline.NAME.ORG.PROJECT.resident_time
-            # stats_counts.zuul.pipeline.NAME.ORG.PROJECT.total_changes
-            project_name = item.change.project.name.replace('/', '.')
-            key += '.%s' % project_name
-            if dt:
-                statsd.timing(key + '.resident_time', dt)
-                statsd.incr(key + '.total_changes')
-        except:
-            self.log.exception("Exception reporting pipeline stats")
-
-
-class DynamicChangeQueueContextManager(object):
-    def __init__(self, change_queue):
-        self.change_queue = change_queue
-
-    def __enter__(self):
-        return self.change_queue
-
-    def __exit__(self, etype, value, tb):
-        if self.change_queue and not self.change_queue.queue:
-            self.change_queue.pipeline.removeQueue(self.change_queue.queue)
-
-
-class IndependentPipelineManager(BasePipelineManager):
-    log = logging.getLogger("zuul.IndependentPipelineManager")
-    changes_merge = False
-
-    def _postConfig(self, layout):
-        super(IndependentPipelineManager, self)._postConfig(layout)
-
-    def getChangeQueue(self, change, existing=None):
-        # creates a new change queue for every change
-        if existing:
-            return DynamicChangeQueueContextManager(existing)
-        if change.project not in self.pipeline.getProjects():
-            self.pipeline.addProject(change.project)
-        change_queue = ChangeQueue(self.pipeline)
-        change_queue.addProject(change.project)
-        self.pipeline.addQueue(change_queue)
-        self.log.debug("Dynamically created queue %s", change_queue)
-        return DynamicChangeQueueContextManager(change_queue)
-
-    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
-                            change_queue):
-        ret = self.checkForChangesNeededBy(change, change_queue)
-        if ret in [True, False]:
-            return ret
-        self.log.debug("  Changes %s must be merged ahead of %s" %
-                       (ret, change))
-        for needed_change in ret:
-            # This differs from the dependent pipeline by enqueuing
-            # changes ahead as "not live", that is, not intended to
-            # have jobs run.  Also, pipeline requirements are always
-            # ignored (which is safe because the changes are not
-            # live).
-            r = self.addChange(needed_change, quiet=True,
-                               ignore_requirements=True,
-                               live=False, change_queue=change_queue)
-            if not r:
-                return False
-        return True
-
-    def checkForChangesNeededBy(self, change, change_queue):
-        if self.pipeline.ignore_dependencies:
-            return True
-        self.log.debug("Checking for changes needed by %s:" % change)
-        # Return true if okay to proceed enqueing this change,
-        # false if the change should not be enqueued.
-        if not hasattr(change, 'needs_changes'):
-            self.log.debug("  Changeish does not support dependencies")
-            return True
-        if not change.needs_changes:
-            self.log.debug("  No changes needed")
-            return True
-        changes_needed = []
-        for needed_change in change.needs_changes:
-            self.log.debug("  Change %s needs change %s:" % (
-                change, needed_change))
-            if needed_change.is_merged:
-                self.log.debug("  Needed change is merged")
-                continue
-            if self.isChangeAlreadyInQueue(needed_change, change_queue):
-                self.log.debug("  Needed change is already ahead in the queue")
-                continue
-            self.log.debug("  Change %s is needed" % needed_change)
-            if needed_change not in changes_needed:
-                changes_needed.append(needed_change)
-                continue
-            # This differs from the dependent pipeline check in not
-            # verifying that the dependent change is mergable.
-        if changes_needed:
-            return changes_needed
-        return True
-
-    def dequeueItem(self, item):
-        super(IndependentPipelineManager, self).dequeueItem(item)
-        # An independent pipeline manager dynamically removes empty
-        # queues
-        if not item.queue.queue:
-            self.pipeline.removeQueue(item.queue)
-
-
-class StaticChangeQueueContextManager(object):
-    def __init__(self, change_queue):
-        self.change_queue = change_queue
-
-    def __enter__(self):
-        return self.change_queue
-
-    def __exit__(self, etype, value, tb):
-        pass
-
-
-class DependentPipelineManager(BasePipelineManager):
-    log = logging.getLogger("zuul.DependentPipelineManager")
-    changes_merge = True
-
-    def __init__(self, *args, **kwargs):
-        super(DependentPipelineManager, self).__init__(*args, **kwargs)
-
-    def _postConfig(self, layout):
-        super(DependentPipelineManager, self)._postConfig(layout)
-        self.buildChangeQueues()
-
-    def buildChangeQueues(self):
-        self.log.debug("Building shared change queues")
-        change_queues = []
-
-        for project in self.pipeline.getProjects():
-            change_queue = ChangeQueue(
-                self.pipeline,
-                window=self.pipeline.window,
-                window_floor=self.pipeline.window_floor,
-                window_increase_type=self.pipeline.window_increase_type,
-                window_increase_factor=self.pipeline.window_increase_factor,
-                window_decrease_type=self.pipeline.window_decrease_type,
-                window_decrease_factor=self.pipeline.window_decrease_factor)
-            change_queue.addProject(project)
-            change_queues.append(change_queue)
-            self.log.debug("Created queue: %s" % change_queue)
-
-        # Iterate over all queues trying to combine them, and keep doing
-        # so until they can not be combined further.
-        last_change_queues = change_queues
-        while True:
-            new_change_queues = self.combineChangeQueues(last_change_queues)
-            if len(last_change_queues) == len(new_change_queues):
-                break
-            last_change_queues = new_change_queues
-
-        self.log.info("  Shared change queues:")
-        for queue in new_change_queues:
-            self.pipeline.addQueue(queue)
-            self.log.info("    %s containing %s" % (
-                queue, queue.generated_name))
-
-    def combineChangeQueues(self, change_queues):
-        self.log.debug("Combining shared queues")
-        new_change_queues = []
-        for a in change_queues:
-            merged_a = False
-            for b in new_change_queues:
-                if not a.getJobs().isdisjoint(b.getJobs()):
-                    self.log.debug("Merging queue %s into %s" % (a, b))
-                    b.mergeChangeQueue(a)
-                    merged_a = True
-                    break  # this breaks out of 'for b' and continues 'for a'
-            if not merged_a:
-                self.log.debug("Keeping queue %s" % (a))
-                new_change_queues.append(a)
-        return new_change_queues
-
-    def getChangeQueue(self, change, existing=None):
-        if existing:
-            return StaticChangeQueueContextManager(existing)
-        return StaticChangeQueueContextManager(
-            self.pipeline.getQueue(change.project))
-
-    def isChangeReadyToBeEnqueued(self, change):
-        if not self.pipeline.source.canMerge(change,
-                                             self.getSubmitAllowNeeds()):
-            self.log.debug("Change %s can not merge, ignoring" % change)
-            return False
-        return True
-
-    def enqueueChangesBehind(self, change, quiet, ignore_requirements,
-                             change_queue):
-        to_enqueue = []
-        self.log.debug("Checking for changes needing %s:" % change)
-        if not hasattr(change, 'needed_by_changes'):
-            self.log.debug("  Changeish does not support dependencies")
-            return
-        for other_change in change.needed_by_changes:
-            with self.getChangeQueue(other_change) as other_change_queue:
-                if other_change_queue != change_queue:
-                    self.log.debug("  Change %s in project %s can not be "
-                                   "enqueued in the target queue %s" %
-                                   (other_change, other_change.project,
-                                    change_queue))
-                    continue
-            if self.pipeline.source.canMerge(other_change,
-                                             self.getSubmitAllowNeeds()):
-                self.log.debug("  Change %s needs %s and is ready to merge" %
-                               (other_change, change))
-                to_enqueue.append(other_change)
-
-        if not to_enqueue:
-            self.log.debug("  No changes need %s" % change)
-
-        for other_change in to_enqueue:
-            self.addChange(other_change, quiet=quiet,
-                           ignore_requirements=ignore_requirements,
-                           change_queue=change_queue)
-
-    def enqueueChangesAhead(self, change, quiet, ignore_requirements,
-                            change_queue):
-        ret = self.checkForChangesNeededBy(change, change_queue)
-        if ret in [True, False]:
-            return ret
-        self.log.debug("  Changes %s must be merged ahead of %s" %
-                       (ret, change))
-        for needed_change in ret:
-            r = self.addChange(needed_change, quiet=quiet,
-                               ignore_requirements=ignore_requirements,
-                               change_queue=change_queue)
-            if not r:
-                return False
-        return True
-
-    def checkForChangesNeededBy(self, change, change_queue):
-        self.log.debug("Checking for changes needed by %s:" % change)
-        # Return true if okay to proceed enqueing this change,
-        # false if the change should not be enqueued.
-        if not hasattr(change, 'needs_changes'):
-            self.log.debug("  Changeish does not support dependencies")
-            return True
-        if not change.needs_changes:
-            self.log.debug("  No changes needed")
-            return True
-        changes_needed = []
-        # Ignore supplied change_queue
-        with self.getChangeQueue(change) as change_queue:
-            for needed_change in change.needs_changes:
-                self.log.debug("  Change %s needs change %s:" % (
-                    change, needed_change))
-                if needed_change.is_merged:
-                    self.log.debug("  Needed change is merged")
-                    continue
-                with self.getChangeQueue(needed_change) as needed_change_queue:
-                    if needed_change_queue != change_queue:
-                        self.log.debug("  Change %s in project %s does not "
-                                       "share a change queue with %s "
-                                       "in project %s" %
-                                       (needed_change, needed_change.project,
-                                        change, change.project))
-                        return False
-                if not needed_change.is_current_patchset:
-                    self.log.debug("  Needed change is not the "
-                                   "current patchset")
-                    return False
-                if self.isChangeAlreadyInQueue(needed_change, change_queue):
-                    self.log.debug("  Needed change is already ahead "
-                                   "in the queue")
-                    continue
-                if self.pipeline.source.canMerge(needed_change,
-                                                 self.getSubmitAllowNeeds()):
-                    self.log.debug("  Change %s is needed" % needed_change)
-                    if needed_change not in changes_needed:
-                        changes_needed.append(needed_change)
-                        continue
-                # The needed change can't be merged.
-                self.log.debug("  Change %s is needed but can not be merged" %
-                               needed_change)
-                return False
-        if changes_needed:
-            return changes_needed
-        return True
-
-    def getFailingDependentItems(self, item):
-        if not hasattr(item.change, 'needs_changes'):
-            return None
-        if not item.change.needs_changes:
-            return None
-        failing_items = set()
-        for needed_change in item.change.needs_changes:
-            needed_item = self.getItemForChange(needed_change)
-            if not needed_item:
-                continue
-            if needed_item.current_build_set.failing_reasons:
-                failing_items.add(needed_item)
-        if failing_items:
-            return failing_items
-        return None
diff --git a/zuul/source/__init__.py b/zuul/source/__init__.py
index cb4501a..69dc162 100644
--- a/zuul/source/__init__.py
+++ b/zuul/source/__init__.py
@@ -27,14 +27,10 @@
 
     Defines the exact public methods that must be supplied."""
 
-    def __init__(self, source_config={}, sched=None, connection=None):
+    def __init__(self, source_config={}, connection=None):
         self.source_config = source_config
-        self.sched = sched
         self.connection = connection
 
-    def stop(self):
-        """Stop the source."""
-
     @abc.abstractmethod
     def getRefSha(self, project, ref):
         """Return a sha for a given project ref."""
@@ -53,7 +49,7 @@
         """Called after configuration has been processed."""
 
     @abc.abstractmethod
-    def getChange(self, event, project):
+    def getChange(self, event):
         """Get the change representing an event."""
 
     @abc.abstractmethod
@@ -63,3 +59,11 @@
     @abc.abstractmethod
     def getGitUrl(self, project):
         """Get the git url for a project."""
+
+    @abc.abstractmethod
+    def getProject(self, name):
+        """Get a project."""
+
+    @abc.abstractmethod
+    def getProjectBranches(self, project):
+        """Get branches for a project"""
diff --git a/zuul/source/gerrit.py b/zuul/source/gerrit.py
deleted file mode 100644
index 828e201..0000000
--- a/zuul/source/gerrit.py
+++ /dev/null
@@ -1,353 +0,0 @@
-# Copyright 2012 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import logging
-import re
-import time
-from zuul import exceptions
-from zuul.model import Change, Ref, NullChange
-from zuul.source import BaseSource
-
-
-# Walk the change dependency tree to find a cycle
-def detect_cycle(change, history=None):
-    if history is None:
-        history = []
-    else:
-        history = history[:]
-    history.append(change.number)
-    for dep in change.needs_changes:
-        if dep.number in history:
-            raise Exception("Dependency cycle detected: %s in %s" % (
-                dep.number, history))
-        detect_cycle(dep, history)
-
-
-class GerritSource(BaseSource):
-    name = 'gerrit'
-    log = logging.getLogger("zuul.GerritSource")
-    replication_timeout = 300
-    replication_retry_interval = 5
-
-    depends_on_re = re.compile(r"^Depends-On: (I[0-9a-f]{40})\s*$",
-                               re.MULTILINE | re.IGNORECASE)
-
-    def getRefSha(self, project, ref):
-        refs = {}
-        try:
-            refs = self.connection.getInfoRefs(project)
-        except:
-            self.log.exception("Exception looking for ref %s" %
-                               ref)
-        sha = refs.get(ref, '')
-        return sha
-
-    def _waitForRefSha(self, project, ref, old_sha=''):
-        # Wait for the ref to show up in the repo
-        start = time.time()
-        while time.time() - start < self.replication_timeout:
-            sha = self.getRefSha(project.name, ref)
-            if old_sha != sha:
-                return True
-            time.sleep(self.replication_retry_interval)
-        return False
-
-    def isMerged(self, change, head=None):
-        self.log.debug("Checking if change %s is merged" % change)
-        if not change.number:
-            self.log.debug("Change has no number; considering it merged")
-            # Good question.  It's probably ref-updated, which, ah,
-            # means it's merged.
-            return True
-
-        data = self.connection.query(change.number)
-        change._data = data
-        change.is_merged = self._isMerged(change)
-        if change.is_merged:
-            self.log.debug("Change %s is merged" % (change,))
-        else:
-            self.log.debug("Change %s is not merged" % (change,))
-        if not head:
-            return change.is_merged
-        if not change.is_merged:
-            return False
-
-        ref = 'refs/heads/' + change.branch
-        self.log.debug("Waiting for %s to appear in git repo" % (change))
-        if self._waitForRefSha(change.project, ref, change._ref_sha):
-            self.log.debug("Change %s is in the git repo" %
-                           (change))
-            return True
-        self.log.debug("Change %s did not appear in the git repo" %
-                       (change))
-        return False
-
-    def _isMerged(self, change):
-        data = change._data
-        if not data:
-            return False
-        status = data.get('status')
-        if not status:
-            return False
-        if status == 'MERGED':
-            return True
-        return False
-
-    def canMerge(self, change, allow_needs):
-        if not change.number:
-            self.log.debug("Change has no number; considering it merged")
-            # Good question.  It's probably ref-updated, which, ah,
-            # means it's merged.
-            return True
-        data = change._data
-        if not data:
-            return False
-        if 'submitRecords' not in data:
-            return False
-        try:
-            for sr in data['submitRecords']:
-                if sr['status'] == 'OK':
-                    return True
-                elif sr['status'] == 'NOT_READY':
-                    for label in sr['labels']:
-                        if label['status'] in ['OK', 'MAY']:
-                            continue
-                        elif label['status'] in ['NEED', 'REJECT']:
-                            # It may be our own rejection, so we ignore
-                            if label['label'].lower() not in allow_needs:
-                                return False
-                            continue
-                        else:
-                            # IMPOSSIBLE
-                            return False
-                else:
-                    # CLOSED, RULE_ERROR
-                    return False
-        except:
-            self.log.exception("Exception determining whether change"
-                               "%s can merge:" % change)
-            return False
-        return True
-
-    def postConfig(self):
-        pass
-
-    def getChange(self, event, project):
-        if event.change_number:
-            refresh = False
-            change = self._getChange(event.change_number, event.patch_number,
-                                     refresh=refresh)
-        elif event.ref:
-            change = Ref(project)
-            change.ref = event.ref
-            change.oldrev = event.oldrev
-            change.newrev = event.newrev
-            change.url = self._getGitwebUrl(project, sha=event.newrev)
-        else:
-            change = NullChange(project)
-        return change
-
-    def _getChange(self, number, patchset, refresh=False, history=None):
-        key = '%s,%s' % (number, patchset)
-        change = self.connection.getCachedChange(key)
-        if change and not refresh:
-            return change
-        if not change:
-            change = Change(None)
-            change.number = number
-            change.patchset = patchset
-        key = '%s,%s' % (change.number, change.patchset)
-        self.connection.updateChangeCache(key, change)
-        try:
-            self._updateChange(change, history)
-        except Exception:
-            self.connection.deleteCachedChange(key)
-            raise
-        return change
-
-    def getProjectOpenChanges(self, project):
-        # This is a best-effort function in case Gerrit is unable to return
-        # a particular change.  It happens.
-        query = "project:%s status:open" % (project.name,)
-        self.log.debug("Running query %s to get project open changes" %
-                       (query,))
-        data = self.connection.simpleQuery(query)
-        changes = []
-        for record in data:
-            try:
-                changes.append(
-                    self._getChange(record['number'],
-                                    record['currentPatchSet']['number']))
-            except Exception:
-                self.log.exception("Unable to query change %s" %
-                                   (record.get('number'),))
-        return changes
-
-    def _getDependsOnFromCommit(self, message, change):
-        records = []
-        seen = set()
-        for match in self.depends_on_re.findall(message):
-            if match in seen:
-                self.log.debug("Ignoring duplicate Depends-On: %s" %
-                               (match,))
-                continue
-            seen.add(match)
-            query = "change:%s" % (match,)
-            self.log.debug("Updating %s: Running query %s "
-                           "to find needed changes" %
-                           (change, query,))
-            records.extend(self.connection.simpleQuery(query))
-        return records
-
-    def _getNeededByFromCommit(self, change_id, change):
-        records = []
-        seen = set()
-        query = 'message:%s' % change_id
-        self.log.debug("Updating %s: Running query %s "
-                       "to find changes needed-by" %
-                       (change, query,))
-        results = self.connection.simpleQuery(query)
-        for result in results:
-            for match in self.depends_on_re.findall(
-                result['commitMessage']):
-                if match != change_id:
-                    continue
-                key = (result['number'], result['currentPatchSet']['number'])
-                if key in seen:
-                    continue
-                self.log.debug("Updating %s: Found change %s,%s "
-                               "needs %s from commit" %
-                               (change, key[0], key[1], change_id))
-                seen.add(key)
-                records.append(result)
-        return records
-
-    def _updateChange(self, change, history=None):
-        self.log.info("Updating %s" % (change,))
-        data = self.connection.query(change.number)
-        change._data = data
-
-        if change.patchset is None:
-            change.patchset = data['currentPatchSet']['number']
-
-        if 'project' not in data:
-            raise exceptions.ChangeNotFound(change.number, change.patchset)
-        change.project = self.sched.getProject(data['project'])
-        change.branch = data['branch']
-        change.url = data['url']
-        max_ps = 0
-        files = []
-        for ps in data['patchSets']:
-            if ps['number'] == change.patchset:
-                change.refspec = ps['ref']
-                for f in ps.get('files', []):
-                    files.append(f['file'])
-            if int(ps['number']) > int(max_ps):
-                max_ps = ps['number']
-        if max_ps == change.patchset:
-            change.is_current_patchset = True
-        else:
-            change.is_current_patchset = False
-        change.files = files
-
-        change.is_merged = self._isMerged(change)
-        change.approvals = data['currentPatchSet'].get('approvals', [])
-        change.open = data['open']
-        change.status = data['status']
-        change.owner = data['owner']
-
-        if change.is_merged:
-            # This change is merged, so we don't need to look any further
-            # for dependencies.
-            self.log.debug("Updating %s: change is merged" % (change,))
-            return change
-
-        if history is None:
-            history = []
-        else:
-            history = history[:]
-        history.append(change.number)
-
-        needs_changes = []
-        if 'dependsOn' in data:
-            parts = data['dependsOn'][0]['ref'].split('/')
-            dep_num, dep_ps = parts[3], parts[4]
-            if dep_num in history:
-                raise Exception("Dependency cycle detected: %s in %s" % (
-                    dep_num, history))
-            self.log.debug("Updating %s: Getting git-dependent change %s,%s" %
-                           (change, dep_num, dep_ps))
-            dep = self._getChange(dep_num, dep_ps, history=history)
-            # Because we are not forcing a refresh in _getChange, it
-            # may return without executing this code, so if we are
-            # updating our change to add ourselves to a dependency
-            # cycle, we won't detect it.  By explicitly performing a
-            # walk of the dependency tree, we will.
-            detect_cycle(dep, history)
-            if (not dep.is_merged) and dep not in needs_changes:
-                needs_changes.append(dep)
-
-        for record in self._getDependsOnFromCommit(data['commitMessage'],
-                                                   change):
-            dep_num = record['number']
-            dep_ps = record['currentPatchSet']['number']
-            if dep_num in history:
-                raise Exception("Dependency cycle detected: %s in %s" % (
-                    dep_num, history))
-            self.log.debug("Updating %s: Getting commit-dependent "
-                           "change %s,%s" %
-                           (change, dep_num, dep_ps))
-            dep = self._getChange(dep_num, dep_ps, history=history)
-            # Because we are not forcing a refresh in _getChange, it
-            # may return without executing this code, so if we are
-            # updating our change to add ourselves to a dependency
-            # cycle, we won't detect it.  By explicitly performing a
-            # walk of the dependency tree, we will.
-            detect_cycle(dep, history)
-            if (not dep.is_merged) and dep not in needs_changes:
-                needs_changes.append(dep)
-        change.needs_changes = needs_changes
-
-        needed_by_changes = []
-        if 'neededBy' in data:
-            for needed in data['neededBy']:
-                parts = needed['ref'].split('/')
-                dep_num, dep_ps = parts[3], parts[4]
-                self.log.debug("Updating %s: Getting git-needed change %s,%s" %
-                               (change, dep_num, dep_ps))
-                dep = self._getChange(dep_num, dep_ps)
-                if (not dep.is_merged) and dep.is_current_patchset:
-                    needed_by_changes.append(dep)
-
-        for record in self._getNeededByFromCommit(data['id'], change):
-            dep_num = record['number']
-            dep_ps = record['currentPatchSet']['number']
-            self.log.debug("Updating %s: Getting commit-needed change %s,%s" %
-                           (change, dep_num, dep_ps))
-            # Because a commit needed-by may be a cross-repo
-            # dependency, cause that change to refresh so that it will
-            # reference the latest patchset of its Depends-On (this
-            # change).
-            dep = self._getChange(dep_num, dep_ps, refresh=True)
-            if (not dep.is_merged) and dep.is_current_patchset:
-                needed_by_changes.append(dep)
-        change.needed_by_changes = needed_by_changes
-
-        return change
-
-    def getGitUrl(self, project):
-        return self.connection.getGitUrl(project)
-
-    def _getGitwebUrl(self, project, sha=None):
-        return self.connection.getGitwebUrl(project, sha)
diff --git a/zuul/trigger/__init__.py b/zuul/trigger/__init__.py
index 16fb0b1..a5406d6 100644
--- a/zuul/trigger/__init__.py
+++ b/zuul/trigger/__init__.py
@@ -23,20 +23,17 @@
 
     Defines the exact public methods that must be supplied."""
 
-    def __init__(self, trigger_config={}, sched=None, connection=None):
-        self.trigger_config = trigger_config
-        self.sched = sched
+    def __init__(self, driver, connection, config=None):
+        self.driver = driver
         self.connection = connection
-
-    def stop(self):
-        """Stop the trigger."""
+        self.config = config or {}
 
     @abc.abstractmethod
     def getEventFilters(self, trigger_conf):
         """Return a list of EventFilter's for the scheduler to match against.
         """
 
-    def postConfig(self):
+    def postConfig(self, pipeline):
         """Called after config is loaded."""
 
     def onChangeMerged(self, change, source):
diff --git a/zuul/trigger/timer.py b/zuul/trigger/timer.py
deleted file mode 100644
index f982914..0000000
--- a/zuul/trigger/timer.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright 2012 Hewlett-Packard Development Company, L.P.
-# Copyright 2013 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from apscheduler.schedulers.background import BackgroundScheduler
-from apscheduler.triggers.cron import CronTrigger
-import logging
-import voluptuous as v
-from zuul.model import EventFilter, TriggerEvent
-from zuul.trigger import BaseTrigger
-
-
-class TimerTrigger(BaseTrigger):
-    name = 'timer'
-    log = logging.getLogger("zuul.TimerTrigger")
-
-    def __init__(self, trigger_config={}, sched=None, connection=None):
-        super(TimerTrigger, self).__init__(trigger_config, sched, connection)
-        self.apsched = BackgroundScheduler()
-        self.apsched.start()
-
-    def _onTrigger(self, pipeline_name, timespec):
-        for project in self.sched.layout.projects.values():
-            event = TriggerEvent()
-            event.type = 'timer'
-            event.timespec = timespec
-            event.forced_pipeline = pipeline_name
-            event.project_name = project.name
-            self.log.debug("Adding event %s" % event)
-            self.sched.addEvent(event)
-
-    def stop(self):
-        self.apsched.shutdown()
-
-    def getEventFilters(self, trigger_conf):
-        def toList(item):
-            if not item:
-                return []
-            if isinstance(item, list):
-                return item
-            return [item]
-
-        efilters = []
-        for trigger in toList(trigger_conf):
-            f = EventFilter(trigger=self,
-                            types=['timer'],
-                            timespecs=toList(trigger['time']))
-
-            efilters.append(f)
-
-        return efilters
-
-    def postConfig(self):
-        for job in self.apsched.get_jobs():
-            job.remove()
-        for pipeline in self.sched.layout.pipelines.values():
-            for ef in pipeline.manager.event_filters:
-                if ef.trigger != self:
-                    continue
-                for timespec in ef.timespecs:
-                    parts = timespec.split()
-                    if len(parts) < 5 or len(parts) > 6:
-                        self.log.error(
-                            "Unable to parse time value '%s' "
-                            "defined in pipeline %s" % (
-                                timespec,
-                                pipeline.name))
-                        continue
-                    minute, hour, dom, month, dow = parts[:5]
-                    if len(parts) > 5:
-                        second = parts[5]
-                    else:
-                        second = None
-                    trigger = CronTrigger(day=dom, day_of_week=dow, hour=hour,
-                                          minute=minute, second=second)
-
-                    self.apsched.add_job(self._onTrigger, trigger=trigger,
-                                         args=(pipeline.name, timespec,))
-
-
-def getSchema():
-    timer_trigger = {v.Required('time'): str}
-    return timer_trigger
diff --git a/zuul/trigger/zuultrigger.py b/zuul/trigger/zuultrigger.py
deleted file mode 100644
index 00b21f2..0000000
--- a/zuul/trigger/zuultrigger.py
+++ /dev/null
@@ -1,148 +0,0 @@
-# Copyright 2012-2014 Hewlett-Packard Development Company, L.P.
-# Copyright 2013 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import logging
-import voluptuous as v
-from zuul.model import EventFilter, TriggerEvent
-from zuul.trigger import BaseTrigger
-
-
-class ZuulTrigger(BaseTrigger):
-    name = 'zuul'
-    log = logging.getLogger("zuul.ZuulTrigger")
-
-    def __init__(self, trigger_config={}, sched=None, connection=None):
-        super(ZuulTrigger, self).__init__(trigger_config, sched, connection)
-        self._handle_parent_change_enqueued_events = False
-        self._handle_project_change_merged_events = False
-
-    def getEventFilters(self, trigger_conf):
-        def toList(item):
-            if not item:
-                return []
-            if isinstance(item, list):
-                return item
-            return [item]
-
-        efilters = []
-        for trigger in toList(trigger_conf):
-            f = EventFilter(
-                trigger=self,
-                types=toList(trigger['event']),
-                pipelines=toList(trigger.get('pipeline')),
-                required_approvals=(
-                    toList(trigger.get('require-approval'))
-                ),
-                reject_approvals=toList(
-                    trigger.get('reject-approval')
-                ),
-            )
-            efilters.append(f)
-
-        return efilters
-
-    def onChangeMerged(self, change, source):
-        # Called each time zuul merges a change
-        if self._handle_project_change_merged_events:
-            try:
-                self._createProjectChangeMergedEvents(change, source)
-            except Exception:
-                self.log.exception(
-                    "Unable to create project-change-merged events for "
-                    "%s" % (change,))
-
-    def onChangeEnqueued(self, change, pipeline):
-        # Called each time a change is enqueued in a pipeline
-        if self._handle_parent_change_enqueued_events:
-            try:
-                self._createParentChangeEnqueuedEvents(change, pipeline)
-            except Exception:
-                self.log.exception(
-                    "Unable to create parent-change-enqueued events for "
-                    "%s in %s" % (change, pipeline))
-
-    def _createProjectChangeMergedEvents(self, change, source):
-        changes = source.getProjectOpenChanges(
-            change.project)
-        for open_change in changes:
-            self._createProjectChangeMergedEvent(open_change)
-
-    def _createProjectChangeMergedEvent(self, change):
-        event = TriggerEvent()
-        event.type = 'project-change-merged'
-        event.trigger_name = self.name
-        event.project_name = change.project.name
-        event.change_number = change.number
-        event.branch = change.branch
-        event.change_url = change.url
-        event.patch_number = change.patchset
-        event.refspec = change.refspec
-        self.sched.addEvent(event)
-
-    def _createParentChangeEnqueuedEvents(self, change, pipeline):
-        self.log.debug("Checking for changes needing %s:" % change)
-        if not hasattr(change, 'needed_by_changes'):
-            self.log.debug("  Changeish does not support dependencies")
-            return
-        for needs in change.needed_by_changes:
-            self._createParentChangeEnqueuedEvent(needs, pipeline)
-
-    def _createParentChangeEnqueuedEvent(self, change, pipeline):
-        event = TriggerEvent()
-        event.type = 'parent-change-enqueued'
-        event.trigger_name = self.name
-        event.pipeline_name = pipeline.name
-        event.project_name = change.project.name
-        event.change_number = change.number
-        event.branch = change.branch
-        event.change_url = change.url
-        event.patch_number = change.patchset
-        event.refspec = change.refspec
-        self.sched.addEvent(event)
-
-    def postConfig(self):
-        self._handle_parent_change_enqueued_events = False
-        self._handle_project_change_merged_events = False
-        for pipeline in self.sched.layout.pipelines.values():
-            for ef in pipeline.manager.event_filters:
-                if ef.trigger != self:
-                    continue
-                if 'parent-change-enqueued' in ef._types:
-                    self._handle_parent_change_enqueued_events = True
-                elif 'project-change-merged' in ef._types:
-                    self._handle_project_change_merged_events = True
-
-
-def getSchema():
-    def toList(x):
-        return v.Any([x], x)
-
-    approval = v.Schema({'username': str,
-                         'email-filter': str,
-                         'email': str,
-                         'older-than': str,
-                         'newer-than': str,
-                         }, extra=True)
-
-    zuul_trigger = {
-        v.Required('event'):
-        toList(v.Any('parent-change-enqueued',
-                     'project-change-merged')),
-        'pipeline': toList(str),
-        'require-approval': toList(approval),
-        'reject-approval': toList(approval),
-    }
-
-    return zuul_trigger
diff --git a/zuul/webapp.py b/zuul/webapp.py
index c1c848b..e16f0b4 100644
--- a/zuul/webapp.py
+++ b/zuul/webapp.py
@@ -51,7 +51,7 @@
         self.port = port
         self.cache_expiry = cache_expiry
         self.cache_time = 0
-        self.cache = None
+        self.cache = {}
         self.daemon = True
         self.server = httpserver.serve(
             dec.wsgify(self.app), host=self.listen_address, port=self.port,
@@ -63,7 +63,7 @@
     def stop(self):
         self.server.server_close()
 
-    def _changes_by_func(self, func):
+    def _changes_by_func(self, func, tenant_name):
         """Filter changes by a user provided function.
 
         In order to support arbitrary collection of subsets of changes
@@ -72,7 +72,7 @@
         is a flattened list of those collected changes.
         """
         status = []
-        jsonstruct = json.loads(self.cache)
+        jsonstruct = json.loads(self.cache[tenant_name])
         for pipeline in jsonstruct['pipelines']:
             for change_queue in pipeline['change_queues']:
                 for head in change_queue['heads']:
@@ -81,11 +81,11 @@
                             status.append(copy.deepcopy(change))
         return json.dumps(status)
 
-    def _status_for_change(self, rev):
+    def _status_for_change(self, rev, tenant_name):
         """Return the statuses for a particular change id X,Y."""
         def func(change):
             return change['id'] == rev
-        return self._changes_by_func(func)
+        return self._changes_by_func(func, tenant_name)
 
     def _normalize_path(self, path):
         # support legacy status.json as well as new /status
@@ -97,14 +97,17 @@
         return None
 
     def app(self, request):
-        path = self._normalize_path(request.path)
+        tenant_name = request.path.split('/')[1]
+        path = request.path.replace('/' + tenant_name, '')
+        path = self._normalize_path(path)
         if path is None:
             raise webob.exc.HTTPNotFound()
 
-        if (not self.cache or
+        if (tenant_name not in self.cache or
             (time.time() - self.cache_time) > self.cache_expiry):
             try:
-                self.cache = self.scheduler.formatStatusJSON()
+                self.cache[tenant_name] = self.scheduler.formatStatusJSON(
+                    tenant_name)
                 # Call time.time() again because formatting above may take
                 # longer than the cache timeout.
                 self.cache_time = time.time()
@@ -113,10 +116,10 @@
                 raise
 
         if path == 'status':
-            response = webob.Response(body=self.cache,
+            response = webob.Response(body=self.cache[tenant_name],
                                       content_type='application/json')
         else:
-            status = self._status_for_change(path)
+            status = self._status_for_change(path, tenant_name)
             if status:
                 response = webob.Response(body=status,
                                           content_type='application/json')
diff --git a/zuul/zk.py b/zuul/zk.py
new file mode 100644
index 0000000..2009945
--- /dev/null
+++ b/zuul/zk.py
@@ -0,0 +1,280 @@
+#!/usr/bin/env python
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import json
+import logging
+import time
+from kazoo.client import KazooClient, KazooState
+from kazoo import exceptions as kze
+from kazoo.recipe.lock import Lock
+
+# States:
+# We are building this node but it is not ready for use.
+BUILDING = 'building'
+# The node is ready for use.
+READY = 'ready'
+# The node should be deleted.
+DELETING = 'deleting'
+
+STATES = set([BUILDING, READY, DELETING])
+
+
+class LockException(Exception):
+    pass
+
+
+class ZooKeeperConnectionConfig(object):
+    '''
+    Represents the connection parameters for a ZooKeeper server.
+    '''
+
+    def __eq__(self, other):
+        if isinstance(other, ZooKeeperConnectionConfig):
+            if other.__dict__ == self.__dict__:
+                return True
+        return False
+
+    def __init__(self, host, port=2181, chroot=None):
+        '''Initialize the ZooKeeperConnectionConfig object.
+
+        :param str host: The hostname of the ZooKeeper server.
+        :param int port: The port on which ZooKeeper is listening.
+            Optional, default: 2181.
+        :param str chroot: A chroot for this connection.  All
+            ZooKeeper nodes will be underneath this root path.
+            Optional, default: None.
+
+        (one per server) defining the ZooKeeper cluster servers. Only
+        the 'host' attribute is required.'.
+
+        '''
+        self.host = host
+        self.port = port
+        self.chroot = chroot or ''
+
+
+class ZooKeeper(object):
+    '''
+    Class implementing the ZooKeeper interface.
+
+    This class uses the facade design pattern to keep common interaction
+    with the ZooKeeper API simple and consistent for the caller, and
+    limits coupling between objects. It allows for more complex interactions
+    by providing direct access to the client connection when needed (though
+    that is discouraged). It also provides for a convenient entry point for
+    testing only ZooKeeper interactions.
+    '''
+
+    log = logging.getLogger("zuul.zk.ZooKeeper")
+
+    REQUEST_ROOT = '/nodepool/requests'
+    NODE_ROOT = '/nodepool/nodes'
+
+    def __init__(self):
+        '''
+        Initialize the ZooKeeper object.
+        '''
+        self.client = None
+        self._became_lost = False
+
+    def _dictToStr(self, data):
+        return json.dumps(data)
+
+    def _strToDict(self, data):
+        return json.loads(data)
+
+    def _connection_listener(self, state):
+        '''
+        Listener method for Kazoo connection state changes.
+
+        .. warning:: This method must not block.
+        '''
+        if state == KazooState.LOST:
+            self.log.debug("ZooKeeper connection: LOST")
+            self._became_lost = True
+        elif state == KazooState.SUSPENDED:
+            self.log.debug("ZooKeeper connection: SUSPENDED")
+        else:
+            self.log.debug("ZooKeeper connection: CONNECTED")
+
+    @property
+    def connected(self):
+        return self.client.state == KazooState.CONNECTED
+
+    @property
+    def suspended(self):
+        return self.client.state == KazooState.SUSPENDED
+
+    @property
+    def lost(self):
+        return self.client.state == KazooState.LOST
+
+    @property
+    def didLoseConnection(self):
+        return self._became_lost
+
+    def resetLostFlag(self):
+        self._became_lost = False
+
+    def connect(self, host_list, read_only=False):
+        '''
+        Establish a connection with ZooKeeper cluster.
+
+        Convenience method if a pre-existing ZooKeeper connection is not
+        supplied to the ZooKeeper object at instantiation time.
+
+        :param list host_list: A list of
+            :py:class:`~nodepool.zk.ZooKeeperConnectionConfig` objects
+            (one per server) defining the ZooKeeper cluster servers.
+        :param bool read_only: If True, establishes a read-only connection.
+
+        '''
+        if self.client is None:
+            self.client = KazooClient(hosts=host_list, read_only=read_only)
+            self.client.add_listener(self._connection_listener)
+            self.client.start()
+
+    def disconnect(self):
+        '''
+        Close the ZooKeeper cluster connection.
+
+        You should call this method if you used connect() to establish a
+        cluster connection.
+        '''
+        if self.client is not None and self.client.connected:
+            self.client.stop()
+            self.client.close()
+            self.client = None
+
+    def resetHosts(self, host_list):
+        '''
+        Reset the ZooKeeper cluster connection host list.
+
+        :param list host_list: A list of
+            :py:class:`~nodepool.zk.ZooKeeperConnectionConfig` objects
+            (one per server) defining the ZooKeeper cluster servers.
+        '''
+        if self.client is not None:
+            self.client.set_hosts(hosts=host_list)
+
+    def submitNodeRequest(self, node_request, watcher):
+        '''
+        Submit a request for nodes to Nodepool.
+
+        :param NodeRequest node_request: A NodeRequest with the
+            contents of the request.
+
+        :param callable watcher: A callable object that will be
+            invoked each time the request is updated.  It is called
+            with two arguments: (node_request, deleted) where
+            node_request is the same argument passed to this method,
+            and deleted is a boolean which is True if the node no
+            longer exists (notably, this will happen on disconnection
+            from ZooKeeper).  The watcher should return False when
+            further updates are no longer necessary.
+        '''
+        priority = 100  # TODO(jeblair): integrate into nodereq
+
+        data = node_request.toDict()
+        data['created_time'] = time.time()
+
+        path = '%s/%s-' % (self.REQUEST_ROOT, priority)
+        path = self.client.create(path, self._dictToStr(data),
+                                  makepath=True,
+                                  sequence=True, ephemeral=True)
+        reqid = path.split("/")[-1]
+        node_request.id = reqid
+
+        def callback(data, stat):
+            if data:
+                data = self._strToDict(data)
+                node_request.updateFromDict(data)
+                request_nodes = node_request.nodeset.getNodes()
+                for i, nodeid in enumerate(data.get('nodes', [])):
+                    node_path = '%s/%s' % (self.NODE_ROOT, nodeid)
+                    node_data, node_stat = self.client.get(node_path)
+                    node_data = self._strToDict(node_data)
+                    request_nodes[i].id = nodeid
+                    request_nodes[i].updateFromDict(node_data)
+            deleted = (data is None)  # data *are* none
+            return watcher(node_request, deleted)
+
+        self.client.DataWatch(path, callback)
+
+    def deleteNodeRequest(self, node_request):
+        '''
+        Delete a request for nodes.
+
+        :param NodeRequest node_request: A NodeRequest with the
+            contents of the request.
+        '''
+
+        path = '%s/%s' % (self.REQUEST_ROOT, node_request.id)
+        try:
+            self.client.delete(path)
+        except kze.NoNodeError:
+            pass
+
+    def storeNode(self, node):
+        '''Store the node.
+
+        The node is expected to already exist and is updated in its
+        entirety.
+
+        :param Node node: The node to update.
+        '''
+
+        path = '%s/%s' % (self.NODE_ROOT, node.id)
+        self.client.set(path, self._dictToStr(node.toDict()))
+
+    def lockNode(self, node, blocking=True, timeout=None):
+        '''
+        Lock a node.
+
+        This should be called as soon as a request is fulfilled and
+        the lock held for as long as the node is in-use.  It can be
+        used by nodepool to detect if Zuul has gone offline and the
+        node should be reclaimed.
+
+        :param Node node: The node which should be locked.
+        '''
+
+        lock_path = '%s/%s/lock' % (self.NODE_ROOT, node.id)
+        try:
+            lock = Lock(self.client, lock_path)
+            have_lock = lock.acquire(blocking, timeout)
+        except kze.LockTimeout:
+            raise LockException(
+                "Timeout trying to acquire lock %s" % lock_path)
+
+        # If we aren't blocking, it's possible we didn't get the lock
+        # because someone else has it.
+        if not have_lock:
+            raise LockException("Did not get lock on %s" % lock_path)
+
+        node.lock = lock
+
+    def unlockNode(self, node):
+        '''
+        Unlock a node.
+
+        The node must already have been locked.
+
+        :param Node node: The node which should be unlocked.
+        '''
+
+        if node.lock is None:
+            raise LockException("Node %s does not hold a lock" % (node,))
+        node.lock.release()
+        node.lock = None