Merge "Re-enable test_tags" into feature/zuulv3
diff --git a/.gitignore b/.gitignore
index f516785..d6a7477 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
 *.egg
 *.egg-info
 *.pyc
+.idea
 .test
 .testrepository
 .tox
diff --git a/README.rst b/README.rst
index 697d994..425a665 100644
--- a/README.rst
+++ b/README.rst
@@ -88,6 +88,14 @@
 
 7) Check storyboard for status of current work items: https://storyboard.openstack.org/#!/board/41
 
+   Work items tagged with ``low-hanging-fruit`` are tasks that have
+   been identified as not requiring extensive knowledge of the
+   system.  They may still require some knowledge of, or
+   investigation into, a specific area, but should be suitable for a
+   developer who is becoming acquainted with the system.  Those items
+   can be found at:
+   https://storyboard.openstack.org/#!/story/list?tags=low-hanging-fruit&tags=zuulv3
+
 Once you are up to speed on those items, it will be helpful to know
 the following:
 
@@ -129,14 +137,11 @@
 Roadmap
 -------
 
-* Implement Zookeeper for Nodepool builders and begin using this in
-  OpenStack Infra
-* Implement Zookeeper for Nodepool launchers
+* Begin using Zuul v3 to run jobs for Zuul itself
 * Implement a shim to translate Zuul v2 demand into Nodepool Zookeeper
   launcher requests
 * Begin using Zookeeper based Nodepool launchers with Zuul v2.5 in
   OpenStack Infra
-* Begin using Zuul v3 to run jobs for Zuul itself
 * Move OpenStack Infra to use Zuul v3
 * Implement Github support
 * Begin using Zuul v3 to run tests on Ansible repos
diff --git a/bindep.txt b/bindep.txt
index b0c4c3b..8d8c45b 100644
--- a/bindep.txt
+++ b/bindep.txt
@@ -1,2 +1,7 @@
+# This is a cross-platform list tracking distribution packages needed by tests;
+# see http://docs.openstack.org/infra/bindep/ for additional information.
+
+mysql-client [test]
+mysql-server [test]
 libjpeg-dev [test]
 zookeeperd [platform:dpkg]
diff --git a/doc/source/connections.rst b/doc/source/connections.rst
index f0820a6..298100a 100644
--- a/doc/source/connections.rst
+++ b/doc/source/connections.rst
@@ -38,6 +38,9 @@
   Path to SSH key to use when logging into above server.
   ``sshkey=/home/zuul/.ssh/id_rsa``
 
+**keepalive**
+  Optional: keepalive interval in seconds; 0 disables keepalives.
+  ``keepalive=60``
 
 Gerrit Configuration
 ~~~~~~~~~~~~~~~~~~~~
@@ -77,3 +80,15 @@
   Who the report should be emailed to by default.
   This can be overridden by individual pipelines.
   ``default_to=you@example.com``
+
+SQL
+----
+
+  Only one connection per database is permitted.
+
+  **driver=sql**
+
+  **dburi**
+    Database connection information in the form of a URI understood by
+    SQLAlchemy (see http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#database-urls).
+    ``dburi=mysql://user:pass@localhost/db``
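An illustrative aside, not part of this change: the ``dburi`` value above is a standard SQLAlchemy database URL. SQLAlchemy does its own parsing internally; the sketch below uses only the standard library's ``urlsplit`` to show the structure such a URI is expected to have.

```python
# Illustrative sketch: break a dburi such as the example above into its
# parts.  SQLAlchemy performs its own parsing; urlsplit is used here
# only to show the structure of the URL.
from urllib.parse import urlsplit

parts = urlsplit("mysql://user:pass@localhost/db")

print(parts.scheme)            # dialect (optionally dialect+driver)
print(parts.username)          # database user
print(parts.password)          # password
print(parts.hostname)          # host
print(parts.path.lstrip("/"))  # database name
```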
diff --git a/doc/source/launchers.rst b/doc/source/launchers.rst
index 8a8c932..c9dbd99 100644
--- a/doc/source/launchers.rst
+++ b/doc/source/launchers.rst
@@ -11,8 +11,6 @@
 .. _`Turbo-Hipster Documentation`:
    http://turbo-hipster.rtfd.org/
 
-.. _FormPost: http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.formpost
-
 .. _launchers:
 
 Launchers
@@ -138,34 +136,6 @@
 Your jobs can check whether the parameters are ``000000`` to act
 differently on each kind of event.
 
-Swift parameters
-~~~~~~~~~~~~~~~~
-
-If swift information has been configured for the job zuul will also
-provide signed credentials for the builder to upload results and
-assets into containers using the `FormPost`_ middleware.
-
-Each zuul container/instruction set will contain each of the following
-parameters where $NAME is the ``name`` defined in the layout.
-
-*SWIFT_$NAME_URL*
-  The swift destination URL. This will be the entire URL including
-  the AUTH, container and path prefix (folder).
-*SWIFT_$NAME_HMAC_BODY*
-  The information signed in the HMAC body. The body is as follows::
-
-    PATH TO OBJECT PREFIX (excluding domain)
-    BLANK LINE (zuul implements no form redirect)
-    MAX FILE SIZE
-    MAX FILE COUNT
-    SIGNATURE EXPIRY
-
-*SWIFT_$NAME_SIGNATURE*
-  The HMAC body signed with the configured key.
-*SWIFT_$NAME_LOGSERVER_PREFIX*
-  The URL to prepend to the object path when returning the results
-  from a build.
-
 Gearman
 -------
 
diff --git a/doc/source/reporters.rst b/doc/source/reporters.rst
index 97bed4a..b01c8d1 100644
--- a/doc/source/reporters.rst
+++ b/doc/source/reporters.rst
@@ -34,7 +34,7 @@
 A simple email reporter is also available.
 
 A :ref:`connection` that uses the smtp driver must be supplied to the
-trigger.
+reporter.
 
 SMTP Configuration
 ~~~~~~~~~~~~~~~~~~
@@ -60,3 +60,42 @@
           to: you@example.com
           from: alternative@example.com
           subject: Change {change} failed
+
+SQL
+---
+
+This reporter is used to store results in a database.
+
+A :ref:`connection` that uses the sql driver must be supplied to the
+reporter.
+
+SQL Configuration
+~~~~~~~~~~~~~~~~~
+
+zuul.conf contains the database connection and credentials. To store different
+reports in different databases you'll need to create a new connection per
+database.
+
+The sql reporter stores the results of individual builds rather than the
+change as a whole.  As such, the sql reporter does nothing on "start" or
+"merge-failure".
+
+**score**
+  A score to store for the result of the build, e.g. -1 might indicate a
+  failed build, similar to the vote posted back via the gerrit reporter.
+
+For example::
+
+  pipelines:
+    - name: post-merge
+      manager: IndependentPipelineManager
+      source: my_gerrit
+      trigger:
+        my_gerrit:
+          - event: change-merged
+      success:
+        mydb_conn:
+          score: 1
+      failure:
+        mydb_conn:
+          score: -1
diff --git a/doc/source/zuul.rst b/doc/source/zuul.rst
index 8906dac..4f43596 100644
--- a/doc/source/zuul.rst
+++ b/doc/source/zuul.rst
@@ -172,76 +172,6 @@
   Path to PID lock file for the merger process.
   ``pidfile=/var/run/zuul-merger/merger.pid``
 
-.. _swift:
-
-swift
-"""""
-
-To send (optional) swift upload instructions this section must be
-present. Multiple destinations can be defined in the :ref:`jobs` section
-of the layout.
-
-If you are sending the temp-url-key or fetching the x-storage-url, you
-will need the python-swiftclient module installed.
-
-**X-Account-Meta-Temp-Url-Key** (optional)
-  This is the key used to sign the HMAC message. If you do not set a
-  key Zuul will generate one automatically.
-
-**Send-Temp-Url-Key** (optional)
-  Zuul can send the X-Account-Meta-Temp-Url-Key to swift for you if
-  you have set up the appropriate credentials in ``authurl`` below.
-  This isn't necessary if you know and have set your
-  X-Account-Meta-Temp-Url-Key.
-  If set, Zuul requires the python-swiftclient module.
-  ``default: true``
-
-**X-Storage-Url** (optional)
-  The storage URL is the destination to upload files into. If you do
-  not set this the ``authurl`` credentials are used to fetch the url
-  from swift and Zuul will requires the python-swiftclient module.
-
-**authurl** (optional)
-  The (keystone) Auth URL for swift.
-  ``For example, https://identity.api.rackspacecloud.com/v2.0/``
-  This is required if you have Send-Temp-Url-Key set to ``True`` or
-  if you have not supplied the X-Storage-Url.
-
-Any of the `swiftclient connection parameters`_ can also be defined
-here by the same name. Including the os_options by their key name (
-``for example tenant_id``)
-
-.. _swiftclient connection parameters: http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#module-swiftclient.client
-
-**region_name** (optional)
-  The region name holding the swift container
-  ``For example, SYD``
-
-Each destination defined by the :ref:`jobs` will have the following
-default values that it may overwrite.
-
-**default_container** (optional)
-  Container name to place the log into
-  ``For example, logs``
-
-**default_expiry** (optional)
-  How long the signed destination should be available for
-  ``default: 7200 (2hrs)``
-
-**default_max_file_size** (optional)
-  The maximum size of an individual file
-  ``default: 104857600 (100MB)``
-
-**default_max_file_count** (optional)
-  The maximum number of separate files to allow
-  ``default: 10``
-
-**default_logserver_prefix**
-  Provide a URL to the CDN or logserver app so that a worker knows
-  what URL to return. The worker should return the logserver_prefix
-  url and the object path.
-  ``For example: http://logs.example.org/server.app?obj=``
-
 .. _connection:
 
 connection ArbitraryName
@@ -786,48 +716,6 @@
 **tags (optional)**
   A list of arbitrary strings which will be associated with the job.
 
-**swift**
-  If :ref:`swift` is configured then each job can define a destination
-  container for the builder to place logs and/or assets into. Multiple
-  containers can be listed for each job by providing a unique ``name``.
-
-  *name*
-    Set an identifying name for the container. This is used in the
-    parameter key sent to the builder. For example if it ``logs`` then
-    one of the parameters sent will be ``SWIFT_logs_CONTAINER``
-    (case-sensitive).
-
-  Each of the defaults defined in :ref:`swift` can be overwritten as:
-
-  *container* (optional)
-    Container name to place the log into
-    ``For example, logs``
-
-  *expiry* (optional)
-    How long the signed destination should be available for
-
-  *max-file-size** (optional)
-    The maximum size of an individual file
-
-  *max_file_size** (optional, deprecated)
-    A deprecated alternate spelling of *max-file-size*.
-
-  *max-file-count* (optional)
-    The maximum number of separate files to allow
-
-  *max_file_count* (optional, deprecated)
-    A deprecated alternate spelling of *max-file-count*.
-
-  *logserver-prefix*
-    Provide a URL to the CDN or logserver app so that a worker knows
-    what URL to return.
-    ``For example: http://logs.example.org/server.app?obj=``
-    The worker should return the logserver-prefix url and the object
-    path as the URL in the results data packet.
-
-  *logserver_prefix* (deprecated)
-    A deprecated alternate spelling of *logserver-prefix*.
-
 Here is an example of setting the failure message for jobs that check
 whether a change merges cleanly::
 
diff --git a/etc/status/public_html/jquery.zuul.js b/etc/status/public_html/jquery.zuul.js
index 9df44ce..d973948 100644
--- a/etc/status/public_html/jquery.zuul.js
+++ b/etc/status/public_html/jquery.zuul.js
@@ -148,11 +148,9 @@
                     case 'skipped':
                         $status.addClass('label-info');
                         break;
-                    case 'in progress':
-                    case 'queued':
-                    case 'lost':
+                    // 'in progress', 'queued', 'lost', 'aborted', ...
+                    default:
                         $status.addClass('label-default');
-                        break;
                 }
                 $status.text(result);
                 return $status;
diff --git a/etc/zuul.conf-sample b/etc/zuul.conf-sample
index 3de145a..7207c73 100644
--- a/etc/zuul.conf-sample
+++ b/etc/zuul.conf-sample
@@ -37,6 +37,7 @@
 ;baseurl=https://review.example.com/r
 user=jenkins
 sshkey=/home/jenkins/.ssh/id_rsa
+;keepalive=60
 
 [connection smtp]
 driver=smtp
@@ -44,3 +45,7 @@
 port=25
 default_from=zuul@example.com
 default_to=you@example.com
+
+[connection mydatabase]
+driver=sql
+dburi=mysql+pymysql://user@localhost/zuul
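The sample above follows zuul.conf's ``[connection <name>]`` section convention. As a minimal sketch (Zuul's real loading lives in zuul.lib.connections and is not shown here), such sections can be picked out with the standard library's configparser:

```python
# Minimal sketch (not Zuul's actual loader): read connection sections
# like those in zuul.conf-sample and index them by connection name.
import configparser

sample = """
[connection gerrit]
driver = gerrit
server = review.example.com

[connection mydatabase]
driver = sql
dburi = mysql+pymysql://user@localhost/zuul
"""

config = configparser.ConfigParser()
config.read_string(sample)

# Section names look like "connection <name>"; strip the prefix to get
# a mapping of connection name -> options.
connections = {
    section.split(None, 1)[1]: dict(config[section])
    for section in config.sections()
    if section.startswith("connection ")
}
print(connections["mydatabase"]["driver"])  # -> sql
```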
diff --git a/requirements.txt b/requirements.txt
index 4c5adc7..84d84be 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17,3 +17,5 @@
 six>=1.6.0
 ansible>=2.0.0.1
 kazoo
+sqlalchemy
+alembic
diff --git a/setup.cfg b/setup.cfg
index bd76d8b..972f261 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -31,3 +31,7 @@
 source-dir = doc/source
 build-dir = doc/build
 all_files = 1
+
+[extras]
+mysql_reporter=
+    PyMySQL
diff --git a/test-requirements.txt b/test-requirements.txt
index 150fd2e..b99c803 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,8 +6,8 @@
 fixtures>=0.3.14
 python-keystoneclient>=0.4.2
 python-subunit
-python-swiftclient>=1.6
 testrepository>=0.0.17
 testtools>=0.9.32
 sphinxcontrib-programoutput
 mock
+PyMySQL
diff --git a/tests/base.py b/tests/base.py
index 1a66524..f26863e 100755
--- a/tests/base.py
+++ b/tests/base.py
@@ -32,17 +32,19 @@
 import socket
 import string
 import subprocess
-import swiftclient
 import sys
 import tempfile
 import threading
 import time
+import uuid
+
 
 import git
 import gear
 import fixtures
 import kazoo.client
 import kazoo.exceptions
+import pymysql
 import statsd
 import testtools
 import testtools.content
@@ -51,12 +53,12 @@
 
 import zuul.driver.gerrit.gerritsource as gerritsource
 import zuul.driver.gerrit.gerritconnection as gerritconnection
+import zuul.connection.sql
 import zuul.scheduler
 import zuul.webapp
 import zuul.rpclistener
 import zuul.launcher.server
 import zuul.launcher.client
-import zuul.lib.swift
 import zuul.lib.connections
 import zuul.merger.client
 import zuul.merger.merger
@@ -274,6 +276,25 @@
                  "eventCreatedOn": 1487613810}
         return event
 
+    def getRefUpdatedEvent(self):
+        path = os.path.join(self.upstream_root, self.project)
+        repo = git.Repo(path)
+        oldrev = repo.heads[self.branch].commit.hexsha
+
+        event = {
+            "type": "ref-updated",
+            "submitter": {
+                "name": "User Name",
+            },
+            "refUpdate": {
+                "oldRev": oldrev,
+                "newRev": self.patchsets[-1]['revision'],
+                "refName": self.branch,
+                "project": self.project,
+            }
+        }
+        return event
+
     def addApproval(self, category, value, username='reviewer_john',
                     granted_on=None, message=''):
         if not granted_on:
@@ -881,18 +902,6 @@
         return True
 
 
-class FakeSwiftClientConnection(swiftclient.client.Connection):
-    def post_account(self, headers):
-        # Do nothing
-        pass
-
-    def get_auth(self):
-        # Returns endpoint and (unused) auth token
-        endpoint = os.path.join('https://storage.example.org', 'V1',
-                                'AUTH_account')
-        return endpoint, ''
-
-
 class FakeNodepool(object):
     REQUEST_ROOT = '/nodepool/requests'
     NODE_ROOT = '/nodepool/nodes'
@@ -1068,6 +1077,43 @@
         _tmp_client.stop()
 
 
+class MySQLSchemaFixture(fixtures.Fixture):
+    def setUp(self):
+        super(MySQLSchemaFixture, self).setUp()
+
+        random_bits = ''.join(random.choice(string.ascii_lowercase +
+                                            string.ascii_uppercase)
+                              for x in range(8))
+        self.name = '%s_%s' % (random_bits, os.getpid())
+        self.passwd = uuid.uuid4().hex
+        db = pymysql.connect(host="localhost",
+                             user="openstack_citest",
+                             passwd="openstack_citest",
+                             db="openstack_citest")
+        cur = db.cursor()
+        cur.execute("create database %s" % self.name)
+        cur.execute(
+            "grant all on %s.* to '%s'@'localhost' identified by '%s'" %
+            (self.name, self.name, self.passwd))
+        cur.execute("flush privileges")
+
+        self.dburi = 'mysql+pymysql://%s:%s@localhost/%s' % (self.name,
+                                                             self.passwd,
+                                                             self.name)
+        self.addDetail('dburi', testtools.content.text_content(self.dburi))
+        self.addCleanup(self.cleanup)
+
+    def cleanup(self):
+        db = pymysql.connect(host="localhost",
+                             user="openstack_citest",
+                             passwd="openstack_citest",
+                             db="openstack_citest")
+        cur = db.cursor()
+        cur.execute("drop database %s" % self.name)
+        cur.execute("drop user '%s'@'localhost'" % self.name)
+        cur.execute("flush privileges")
+
+
 class BaseTestCase(testtools.TestCase):
     log = logging.getLogger("zuul.test")
     wait_timeout = 20
@@ -1272,11 +1318,6 @@
 
         self.sched = zuul.scheduler.Scheduler(self.config)
 
-        self.useFixture(fixtures.MonkeyPatch('swiftclient.client.Connection',
-                                             FakeSwiftClientConnection))
-
-        self.swift = zuul.lib.swift.Swift(self.config)
-
         self.event_queues = [
             self.sched.result_event_queue,
             self.sched.trigger_event_queue,
@@ -1307,7 +1348,7 @@
         self.builds = self.launch_server.running_builds
 
         self.launch_client = zuul.launcher.client.LaunchClient(
-            self.config, self.sched, self.swift)
+            self.config, self.sched)
         self.merge_client = zuul.merger.client.MergeClient(
             self.config, self.sched)
         self.nodepool = zuul.nodepool.Nodepool(self.sched)
@@ -1361,6 +1402,9 @@
             getGerritConnection))
 
         # Set up smtp related fakes
+        # TODO(jhesketh): This should come from lib.connections for better
+        # coverage
+        # Register connections from the config
         self.smtp_messages = []
 
         def FakeSMTPFactory(*args, **kw):
@@ -1872,3 +1916,20 @@
 class AnsibleZuulTestCase(ZuulTestCase):
     """ZuulTestCase but with an actual ansible launcher running"""
     run_ansible = True
+
+
+class ZuulDBTestCase(ZuulTestCase):
+    def setup_config(self, config_file='zuul-connections-same-gerrit.conf'):
+        super(ZuulDBTestCase, self).setup_config(config_file)
+        for section_name in self.config.sections():
+            con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
+                                 section_name, re.I)
+            if not con_match:
+                continue
+
+            if self.config.get(section_name, 'driver') == 'sql':
+                f = MySQLSchemaFixture()
+                self.useFixture(f)
+                if (self.config.get(section_name, 'dburi') ==
+                    '$MYSQL_FIXTURE_DBURI$'):
+                    self.config.set(section_name, 'dburi', f.dburi)
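The section-name regex in ZuulDBTestCase above accepts either a bare connection name or one wrapped in matching single or double quotes; the backreference ``\1`` requires the closing quote to equal the opening one (or be empty when unquoted). A quick standalone check of that behavior:

```python
import re

# The pattern copied from ZuulDBTestCase.setup_config: an optional
# opening quote, the connection name, then the same quote again via the
# backreference \1 (empty when the name is unquoted).
con_re = re.compile(r'^connection ([\'\"]?)(.*)(\1)$', re.I)

for section in ('connection gerrit',
                "connection 'mydatabase'",
                'connection "resultsdb"'):
    print(con_re.match(section).group(2))
# -> gerrit, mydatabase, resultsdb
```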
diff --git a/tests/fixtures/config/in-repo/git/common-config/playbooks/common-config-test.yaml b/tests/fixtures/config/in-repo/git/common-config/playbooks/common-config-test.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/in-repo/git/common-config/playbooks/common-config-test.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/in-repo/git/common-config/zuul.yaml b/tests/fixtures/config/in-repo/git/common-config/zuul.yaml
index 58b2051..d8b7200 100644
--- a/tests/fixtures/config/in-repo/git/common-config/zuul.yaml
+++ b/tests/fixtures/config/in-repo/git/common-config/zuul.yaml
@@ -35,3 +35,12 @@
       gerrit:
         verified: 0
     precedence: high
+
+- job:
+    name: common-config-test
+
+- project:
+    name: common-config
+    tenant-one-gate:
+      jobs:
+        - common-config-test
diff --git a/tests/fixtures/layout-cloner.yaml b/tests/fixtures/layout-cloner.yaml
index e840ed9..e8b5dde 100644
--- a/tests/fixtures/layout-cloner.yaml
+++ b/tests/fixtures/layout-cloner.yaml
@@ -1,4 +1,16 @@
 pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
   - name: gate
     manager: DependentPipelineManager
     failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
@@ -18,28 +30,54 @@
       gerrit:
         verified: -2
 
+  - name: post
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+
 projects:
+  - name: org/project
+    check:
+      - integration
+    gate:
+      - integration
 
   - name: org/project1
+    check:
+      - integration
     gate:
-        - integration
+      - integration
+    post:
+      - postjob
 
   - name: org/project2
+    check:
+      - integration
     gate:
-        - integration
+      - integration
 
   - name: org/project3
+    check:
+      - integration
     gate:
-        - integration
+      - integration
 
   - name: org/project4
+    check:
+      - integration
     gate:
-        - integration
+      - integration
 
   - name: org/project5
+    check:
+      - integration
     gate:
-        - integration
+      - integration
 
   - name: org/project6
+    check:
+      - integration
     gate:
-        - integration
+      - integration
diff --git a/tests/fixtures/layout-mutex-reconfiguration.yaml b/tests/fixtures/layout-mutex-reconfiguration.yaml
new file mode 100644
index 0000000..76cf1e9
--- /dev/null
+++ b/tests/fixtures/layout-mutex-reconfiguration.yaml
@@ -0,0 +1,23 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+jobs:
+  - name: mutex-one
+    mutex: test-mutex
+  - name: mutex-two
+    mutex: test-mutex
+
+projects:
+  - name: org/project
+    check:
+      - project-test1
diff --git a/tests/fixtures/layout-sql-reporter.yaml b/tests/fixtures/layout-sql-reporter.yaml
new file mode 100644
index 0000000..c79a432
--- /dev/null
+++ b/tests/fixtures/layout-sql-reporter.yaml
@@ -0,0 +1,27 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    source:
+        review_gerrit
+    trigger:
+      review_gerrit:
+        - event: patchset-created
+    success:
+      review_gerrit:
+        verified: 1
+      resultsdb:
+        score: 1
+    failure:
+      review_gerrit:
+        verified: -1
+      resultsdb:
+        score: -1
+      resultsdb_failures:
+        score: -1
+
+projects:
+  - name: org/project
+    check:
+      - project-merge:
+        - project-test1
+        - project-test2
diff --git a/tests/fixtures/layout-swift.yaml b/tests/fixtures/layout-swift.yaml
deleted file mode 100644
index acaaad8..0000000
--- a/tests/fixtures/layout-swift.yaml
+++ /dev/null
@@ -1,59 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: patchset-created
-    success:
-      gerrit:
-        verified: 1
-    failure:
-      gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    trigger:
-      gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      gerrit:
-        verified: 2
-        submit: true
-    failure:
-      gerrit:
-        verified: -2
-    start:
-      gerrit:
-        verified: 0
-    precedence: high
-
-jobs:
-  - name: ^.*$
-    swift:
-      - name: logs
-  - name: ^.*-merge$
-    swift:
-      - name: logs
-        container: merge_logs
-    failure-message: Unable to merge change
-  - name: test-test
-    swift:
-      - name: MOSTLY
-        container: stash
-
-projects:
-  - name: org/project
-    gate:
-      - test-merge
-      - test-test
diff --git a/tests/fixtures/layouts/bad_gerrit_missing.yaml b/tests/fixtures/layouts/bad_gerrit_missing.yaml
deleted file mode 100644
index 8db7248..0000000
--- a/tests/fixtures/layouts/bad_gerrit_missing.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      not_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-projects:
-  - name: test-org/test
-    check:
-      - test-merge
-      - test-test
diff --git a/tests/fixtures/layouts/bad_merge_failure.yaml b/tests/fixtures/layouts/bad_merge_failure.yaml
deleted file mode 100644
index d9b973c..0000000
--- a/tests/fixtures/layouts/bad_merge_failure.yaml
+++ /dev/null
@@ -1,40 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-    # merge-failure-message needs a string.
-    merge-failure-message:
-
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    trigger:
-      review_gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      review_gerrit:
-        verified: 2
-        submit: true
-    failure:
-      review_gerrit:
-        verified: -2
-    merge-failure:
-    start:
-      review_gerrit:
-        verified: 0
-    precedence: high
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layouts/bad_misplaced_ref.yaml b/tests/fixtures/layouts/bad_misplaced_ref.yaml
deleted file mode 100644
index d8bb6bc..0000000
--- a/tests/fixtures/layouts/bad_misplaced_ref.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-pipelines:
-  - name: 'check'
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-          ref: /some/ref/path
-
-projects:
-  - name: org/project
-    merge-mode: cherry-pick
-    check:
-      - project-check
diff --git a/tests/fixtures/layouts/bad_pipelines1.yaml b/tests/fixtures/layouts/bad_pipelines1.yaml
deleted file mode 100644
index 09638bc..0000000
--- a/tests/fixtures/layouts/bad_pipelines1.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
-# Pipelines completely missing. At least one is required.
-pipelines:
diff --git a/tests/fixtures/layouts/bad_pipelines10.yaml b/tests/fixtures/layouts/bad_pipelines10.yaml
deleted file mode 100644
index ddde946..0000000
--- a/tests/fixtures/layouts/bad_pipelines10.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-
-projects:
-  - name: foo
-    # merge-mode must be one of merge, merge-resolve, cherry-pick.
-    merge-mode: foo
diff --git a/tests/fixtures/layouts/bad_pipelines2.yaml b/tests/fixtures/layouts/bad_pipelines2.yaml
deleted file mode 100644
index fc1e154..0000000
--- a/tests/fixtures/layouts/bad_pipelines2.yaml
+++ /dev/null
@@ -1,7 +0,0 @@
-pipelines:
-  # name is required for pipelines
-  - noname: check
-    manager: IndependentPipelineManager
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines3.yaml b/tests/fixtures/layouts/bad_pipelines3.yaml
deleted file mode 100644
index 93ac266..0000000
--- a/tests/fixtures/layouts/bad_pipelines3.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-pipelines:
-  - name: check
-    # The manager must be one of IndependentPipelineManager
-    # or DependentPipelineManager
-    manager: NonexistentPipelineManager
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines4.yaml b/tests/fixtures/layouts/bad_pipelines4.yaml
deleted file mode 100644
index 3a91604..0000000
--- a/tests/fixtures/layouts/bad_pipelines4.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      gerrit:
-        # non-event is not a valid gerrit event
-        - event: non-event
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines5.yaml b/tests/fixtures/layouts/bad_pipelines5.yaml
deleted file mode 100644
index a91ac7a..0000000
--- a/tests/fixtures/layouts/bad_pipelines5.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        # event is a required item but it is missing.
-        - approval:
-            - approved: 1
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines6.yaml b/tests/fixtures/layouts/bad_pipelines6.yaml
deleted file mode 100644
index bf2d538..0000000
--- a/tests/fixtures/layouts/bad_pipelines6.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: comment-added
-          # approved is not a valid entry. Should be approval.
-          approved: 1
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines7.yaml b/tests/fixtures/layouts/bad_pipelines7.yaml
deleted file mode 100644
index e2db495..0000000
--- a/tests/fixtures/layouts/bad_pipelines7.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-pipelines:
-  # The pipeline must have a name.
-  - manager: IndependentPipelineManager
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines8.yaml b/tests/fixtures/layouts/bad_pipelines8.yaml
deleted file mode 100644
index 9c5918e..0000000
--- a/tests/fixtures/layouts/bad_pipelines8.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-pipelines:
-  # The pipeline must have a manager
-  - name: check
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_pipelines9.yaml b/tests/fixtures/layouts/bad_pipelines9.yaml
deleted file mode 100644
index 89307d5..0000000
--- a/tests/fixtures/layouts/bad_pipelines9.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-pipelines:
-  # Names must be unique.
-  - name: check
-    manager: IndependentPipelineManager
-  - name: check
-    manager: IndependentPipelineManager
-
-projects:
-  - name: foo
diff --git a/tests/fixtures/layouts/bad_projects1.yaml b/tests/fixtures/layouts/bad_projects1.yaml
deleted file mode 100644
index e3d381f..0000000
--- a/tests/fixtures/layouts/bad_projects1.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-
-projects:
-  - name: foo
-  # gate pipeline is not defined.
-    gate:
-      - test
-
diff --git a/tests/fixtures/layouts/bad_projects2.yaml b/tests/fixtures/layouts/bad_projects2.yaml
deleted file mode 100644
index 9291cc9..0000000
--- a/tests/fixtures/layouts/bad_projects2.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-
-projects:
-  - name: foo
-    check:
-      # Indentation is one level too deep on the last line.
-      - test
-        - foo
diff --git a/tests/fixtures/layouts/bad_reject.yaml b/tests/fixtures/layouts/bad_reject.yaml
deleted file mode 100644
index 0549875..0000000
--- a/tests/fixtures/layouts/bad_reject.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-# Template is going to be called but missing a parameter
-
-pipelines:
-  - name: 'check'
-    manager: IndependentPipelineManager
-    require:
-      open: True
-      current-patchset: True
-      approval:
-        - verified: [1, 2]
-          username: jenkins
-        - workflow: 1
-    reject:
-      # Reject only takes 'approval', has no need for open etc..
-      open: True
-      approval:
-        - code-review: [-1, -2]
-          username: core-person
-    trigger:
-      review_gerrit:
-        - event: patchset-created
diff --git a/tests/fixtures/layouts/bad_swift.yaml b/tests/fixtures/layouts/bad_swift.yaml
deleted file mode 100644
index f217821..0000000
--- a/tests/fixtures/layouts/bad_swift.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-jobs:
-  - name: ^.*$
-    swift:
-      - name: logs
-  - name: ^.*-merge$
-    # swift requires a name
-    swift:
-        container: merge_assets
-    failure-message: Unable to merge change
-
-projects:
-  - name: test-org/test
-    check:
-      - test-merge
-      - test-test
diff --git a/tests/fixtures/layouts/bad_template1.yaml b/tests/fixtures/layouts/bad_template1.yaml
deleted file mode 100644
index 8868edf..0000000
--- a/tests/fixtures/layouts/bad_template1.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-# Template is going to be called but missing a parameter
-
-pipelines:
-  - name: 'check'
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-
-project-templates:
-  - name: template-generic
-    check:
-     # Template uses the 'project' parameter' which must be provided
-     - '{project}-merge'
-
-projects:
-  - name: organization/project
-    template:
-      - name: template-generic
-      # Here we 'forgot' to pass 'project'
diff --git a/tests/fixtures/layouts/bad_template2.yaml b/tests/fixtures/layouts/bad_template2.yaml
deleted file mode 100644
index 09a5f91..0000000
--- a/tests/fixtures/layouts/bad_template2.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
-# Template is going to be called with an extra parameter
-
-pipelines:
-  - name: 'check'
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-
-project-templates:
-  - name: template-generic
-    check:
-     # Template only uses the 'project' parameter'
-     - '{project}-merge'
-
-projects:
-  - name: organization/project
-    template:
-      - name: template-generic
-        project: 'MyProjectName'
-        # Feed an extra parameters which is not going to be used
-        # by the template.  That is an error.
-        extraparam: 'IShouldNotBeSet'
diff --git a/tests/fixtures/layouts/bad_template3.yaml b/tests/fixtures/layouts/bad_template3.yaml
deleted file mode 100644
index 54697c4..0000000
--- a/tests/fixtures/layouts/bad_template3.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-# Template refers to an unexisting pipeline
-
-project-templates:
-  - name: template-generic
-    unexisting-pipeline:  # pipeline does not exist
-
-projects:
-  - name: organization/project
-    template:
-      - name: template-generic
diff --git a/tests/fixtures/layouts/good_connections1.yaml b/tests/fixtures/layouts/good_connections1.yaml
deleted file mode 100644
index f5f55b1..0000000
--- a/tests/fixtures/layouts/good_connections1.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    source: review_gerrit
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      other_gerrit:
-        verified: -1
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layouts/good_layout.yaml b/tests/fixtures/layouts/good_layout.yaml
deleted file mode 100644
index 0e21d57..0000000
--- a/tests/fixtures/layouts/good_layout.yaml
+++ /dev/null
@@ -1,102 +0,0 @@
-includes:
-  - python-file: openstack_functions.py
-
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    require:
-      open: True
-      current-patchset: True
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-        - event: comment-added
-          require-approval:
-            - verified: [-1, -2]
-              username: jenkins
-          approval:
-            - workflow: 1
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-          ignore-deletes: True
-
-  - name: gate
-    manager: DependentPipelineManager
-    success-message: Your change is awesome.
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    require:
-      open: True
-      current-patchset: True
-      approval:
-        - verified: [1, 2]
-          username: jenkins
-        - workflow: 1
-    reject:
-      approval:
-        - code-review: [-1, -2]
-    trigger:
-      review_gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    start:
-      review_gerrit:
-        verified: 0
-    success:
-      review_gerrit:
-        verified: 2
-        code-review: 1
-        submit: true
-    failure:
-      review_gerrit:
-        verified: -2
-        workinprogress: true
-
-  - name: merge-check
-    manager: IndependentPipelineManager
-    source: review_gerrit
-    ignore-dependencies: true
-    trigger:
-      zuul:
-        - event: project-change-merged
-    merge-failure:
-      review_gerrit:
-        verified: -1
-
-jobs:
-  - name: ^.*-merge$
-    failure-message: Unable to merge change
-    hold-following-changes: true
-  - name: test-merge
-    parameter-function: devstack_params
-  - name: test-test
-  - name: test-merge2
-    success-pattern: http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}/success
-    failure-pattern: http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}/fail
-  - name: project-testfile
-    files:
-      - 'tools/.*-requires'
-
-projects:
-  - name: test-org/test
-    merge-mode: cherry-pick
-    check:
-      - test-merge2:
-          - test-thing1:
-              - test-thing2
-              - test-thing3
-    gate:
-      - test-thing
-    post:
-      - test-post
diff --git a/tests/fixtures/layouts/good_merge_failure.yaml b/tests/fixtures/layouts/good_merge_failure.yaml
deleted file mode 100644
index afede3c..0000000
--- a/tests/fixtures/layouts/good_merge_failure.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    merge-failure-message: "Could not merge the change. Please rebase..."
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-  - name: post
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: ref-updated
-          ref: ^(?!refs/).*$
-    merge-failure:
-      review_gerrit:
-        verified: -1
-
-  - name: gate
-    manager: DependentPipelineManager
-    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
-    trigger:
-      review_gerrit:
-        - event: comment-added
-          approval:
-            - approved: 1
-    success:
-      review_gerrit:
-        verified: 2
-        submit: true
-    failure:
-      review_gerrit:
-        verified: -2
-    merge-failure:
-      review_gerrit:
-        verified: -1
-      my_smtp:
-        to: you@example.com
-    start:
-      review_gerrit:
-        verified: 0
-    precedence: high
-
-projects:
-  - name: org/project
-    check:
-      - project-check
diff --git a/tests/fixtures/layouts/good_require_approvals.yaml b/tests/fixtures/layouts/good_require_approvals.yaml
deleted file mode 100644
index d899765..0000000
--- a/tests/fixtures/layouts/good_require_approvals.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
-includes:
-  - python-file: custom_functions.py
-
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: comment-added
-          require-approval:
-            - username: jenkins
-              older-than: 48h
-        - event: comment-added
-          require-approval:
-            - email: jenkins@example.com
-              newer-than: 48h
-        - event: comment-added
-          require-approval:
-            - approved: 1
-        - event: comment-added
-          require-approval:
-            - approved: 1
-              username: jenkins
-              email: jenkins@example.com
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-projects:
-  - name: org/project
-    merge-mode: cherry-pick
-    check:
-      - project-check
diff --git a/tests/fixtures/layouts/good_swift.yaml b/tests/fixtures/layouts/good_swift.yaml
deleted file mode 100644
index 48ca7f0..0000000
--- a/tests/fixtures/layouts/good_swift.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-pipelines:
-  - name: check
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-    success:
-      review_gerrit:
-        verified: 1
-    failure:
-      review_gerrit:
-        verified: -1
-
-jobs:
-  - name: ^.*$
-    swift:
-      - name: logs
-  - name: ^.*-merge$
-    swift:
-      - name: assets
-        container: merge_assets
-    failure-message: Unable to merge change
-  - name: test-test
-    swift:
-      - name: mostly
-        container: stash
-
-projects:
-  - name: test-org/test
-    check:
-      - test-merge
-      - test-test
diff --git a/tests/fixtures/layouts/good_template1.yaml b/tests/fixtures/layouts/good_template1.yaml
deleted file mode 100644
index 1680c7b..0000000
--- a/tests/fixtures/layouts/good_template1.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-pipelines:
-  - name: 'check'
-    manager: IndependentPipelineManager
-    trigger:
-      review_gerrit:
-        - event: patchset-created
-
-project-templates:
-  - name: template-generic
-    check:
-     - '{project}-merge'
-
-projects:
-  - name: organization/project
-    template:
-      - name: template-generic
-        project: 'myproject'
diff --git a/tests/fixtures/layouts/zuul_default.conf b/tests/fixtures/layouts/zuul_default.conf
deleted file mode 100644
index 6440027..0000000
--- a/tests/fixtures/layouts/zuul_default.conf
+++ /dev/null
@@ -1,36 +0,0 @@
-[gearman]
-server=127.0.0.1
-
-[zuul]
-layout_config=layout.yaml
-url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
-job_name_in_report=true
-
-[merger]
-git_dir=/tmp/zuul-test/git
-git_user_email=zuul@example.com
-git_user_name=zuul
-zuul_url=http://zuul.example.com/p
-
-[swift]
-authurl=https://identity.api.example.org/v2.0/
-user=username
-key=password
-tenant_name=" "
-
-default_container=logs
-region_name=EXP
-logserver_prefix=http://logs.example.org/server.app/
-
-[connection review_gerrit]
-driver=gerrit
-server=review.example.com
-user=jenkins
-sshkey=none
-
-[connection my_smtp]
-driver=smtp
-server=localhost
-port=25
-default_from=zuul@example.com
-default_to=you@example.com
diff --git a/tests/fixtures/layouts/good_connections1.conf b/tests/fixtures/zuul-connections-bad-sql.conf
similarity index 61%
rename from tests/fixtures/layouts/good_connections1.conf
rename to tests/fixtures/zuul-connections-bad-sql.conf
index 768cbb0..2d1e804 100644
--- a/tests/fixtures/layouts/good_connections1.conf
+++ b/tests/fixtures/zuul-connections-bad-sql.conf
@@ -2,7 +2,7 @@
 server=127.0.0.1
 
 [zuul]
-layout_config=layout.yaml
+layout_config=layout-connections-multiple-voters.yaml
 url_pattern=http://logs.example.com/{change.number}/{change.patchset}/{pipeline.name}/{job.name}/{build.number}
 job_name_in_report=true
 
@@ -12,31 +12,29 @@
 git_user_name=zuul
 zuul_url=http://zuul.example.com/p
 
-[swift]
-authurl=https://identity.api.example.org/v2.0/
-user=username
-key=password
-tenant_name=" "
-
-default_container=logs
-region_name=EXP
-logserver_prefix=http://logs.example.org/server.app/
-
 [connection review_gerrit]
 driver=gerrit
 server=review.example.com
 user=jenkins
 sshkey=none
 
-[connection other_gerrit]
+[connection alt_voting_gerrit]
 driver=gerrit
-server=review2.example.com
-user=jenkins2
+server=alt_review.example.com
+user=civoter
 sshkey=none
 
-[connection my_smtp]
+[connection outgoing_smtp]
 driver=smtp
 server=localhost
 port=25
 default_from=zuul@example.com
 default_to=you@example.com
+
+[connection resultsdb]
+driver=sql
+dburi=mysql+pymysql://bad:creds@host/db
+
+[connection resultsdb_failures]
+driver=sql
+dburi=mysql+pymysql://bad:creds@host/db
diff --git a/tests/fixtures/zuul-connections-multiple-gerrits.conf b/tests/fixtures/zuul-connections-multiple-gerrits.conf
index 3e6850d..c1a335d 100644
--- a/tests/fixtures/zuul-connections-multiple-gerrits.conf
+++ b/tests/fixtures/zuul-connections-multiple-gerrits.conf
@@ -15,16 +15,6 @@
 [launcher]
 git_dir=/tmp/zuul-test/launcher-git
 
-[swift]
-authurl=https://identity.api.example.org/v2.0/
-user=username
-key=password
-tenant_name=" "
-
-default_container=logs
-region_name=EXP
-logserver_prefix=http://logs.example.org/server.app/
-
 [connection review_gerrit]
 driver=gerrit
 server=review.example.com
diff --git a/tests/fixtures/zuul-connections-same-gerrit.conf b/tests/fixtures/zuul-connections-same-gerrit.conf
index 57b5182..5c10444 100644
--- a/tests/fixtures/zuul-connections-same-gerrit.conf
+++ b/tests/fixtures/zuul-connections-same-gerrit.conf
@@ -15,27 +15,17 @@
 [launcher]
 git_dir=/tmp/zuul-test/launcher-git
 
-[swift]
-authurl=https://identity.api.example.org/v2.0/
-user=username
-key=password
-tenant_name=" "
-
-default_container=logs
-region_name=EXP
-logserver_prefix=http://logs.example.org/server.app/
-
 [connection review_gerrit]
 driver=gerrit
 server=review.example.com
 user=jenkins
-sshkey=none
+sshkey=fake_id_rsa1
 
 [connection alt_voting_gerrit]
 driver=gerrit
 server=review.example.com
 user=civoter
-sshkey=none
+sshkey=fake_id_rsa2
 
 [connection outgoing_smtp]
 driver=smtp
@@ -43,3 +33,13 @@
 port=25
 default_from=zuul@example.com
 default_to=you@example.com
+
+# TODOv3(jeblair): commented out until the sqlalchemy connection is
+# ported to v3 driver syntax
+#[connection resultsdb]
+#driver=sql
+#dburi=$MYSQL_FIXTURE_DBURI$
+
+#[connection resultsdb_failures]
+#driver=sql
+#dburi=$MYSQL_FIXTURE_DBURI$
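The fixtures above use INI-style `[connection NAME]` sections to declare gerrit, smtp, and sql connections. A minimal sketch of how such sections can be discovered with the standard library's `configparser` (the fixture text below mirrors the files above; this is not Zuul's actual configuration loader):

```python
import configparser

# Hypothetical fixture text shaped like the zuul-connections-*.conf files.
conf_text = """
[gearman]
server=127.0.0.1

[connection review_gerrit]
driver=gerrit
server=review.example.com
user=jenkins
sshkey=fake_id_rsa1

[connection outgoing_smtp]
driver=smtp
server=localhost
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# Every "[connection <name>]" section becomes an entry keyed by <name>.
connections = {
    section.split(None, 1)[1]: dict(parser[section])
    for section in parser.sections()
    if section.startswith('connection ')
}
```

Each connection then carries its own `driver` key, which is how a single scheduler can talk to multiple gerrits or databases at once.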
diff --git a/tests/fixtures/zuul.conf b/tests/fixtures/zuul.conf
index 48129d8..f0b6068 100644
--- a/tests/fixtures/zuul.conf
+++ b/tests/fixtures/zuul.conf
@@ -29,7 +29,7 @@
 driver=gerrit
 server=review.example.com
 user=jenkins
-sshkey=none
+sshkey=fake_id_rsa_path
 
 [connection smtp]
 driver=smtp
diff --git a/tests/make_playbooks.py b/tests/make_playbooks.py
index 12d9e71..33d45ca 100755
--- a/tests/make_playbooks.py
+++ b/tests/make_playbooks.py
@@ -39,7 +39,7 @@
         if os.path.exists(os.path.join(path, fn)):
             config_path = os.path.join(path, fn)
             break
-    config = yaml.load(open(config_path))
+    config = yaml.safe_load(open(config_path))
     for block in config:
         if 'job' not in block:
             continue
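The hunk above swaps `yaml.load` for `yaml.safe_load`. The safe loader only constructs plain Python types, whereas `yaml.load` with the default loader historically could instantiate arbitrary Python objects from tags such as `!!python/object`. A minimal sketch using PyYAML (the document shape below is an assumption modeled on the `'job'` blocks the script iterates over, not the real job config):

```python
import yaml  # PyYAML

# safe_load builds only plain types (dict, list, str, int, bool, ...),
# so untrusted or malformed config cannot trigger object construction.
doc = """
- job:
    name: test-merge
    voting: true
"""
config = yaml.safe_load(doc)
```

The parsed result is an ordinary list of dicts, matching the `for block in config: if 'job' not in block` loop in `make_playbooks.py`.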
diff --git a/tests/unit/test_cloner.py b/tests/unit/test_cloner.py
index 02ae910..da0f774 100644
--- a/tests/unit/test_cloner.py
+++ b/tests/unit/test_cloner.py
@@ -89,6 +89,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -105,11 +106,34 @@
                                   'be correct' % (project, number))
 
         work = self.getWorkspaceRepos(projects)
-        upstream_repo_path = os.path.join(self.upstream_root, 'org/project1')
-        self.assertEquals(
+        # project1 is the zuul_project, so the origin should be set to
+        # the zuul_url since that is the most up-to-date source.
+        cache_repo_path = os.path.join(cache_root, 'org/project1')
+        self.assertNotEqual(
             work['org/project1'].remotes.origin.url,
+            cache_repo_path,
+            'workspace repo origin should not be the cache'
+        )
+        zuul_url_repo_path = os.path.join(self.git_root, 'org/project1')
+        self.assertEqual(
+            work['org/project1'].remotes.origin.url,
+            zuul_url_repo_path,
+            'workspace repo origin should be the zuul url'
+        )
+
+        # project2 is not the zuul_project, so the origin should be set
+        # to upstream since that is the best we can do.
+        cache_repo_path = os.path.join(cache_root, 'org/project2')
+        self.assertNotEqual(
+            work['org/project2'].remotes.origin.url,
+            cache_repo_path,
+            'workspace repo origin should not be the cache'
+        )
+        upstream_repo_path = os.path.join(self.upstream_root, 'org/project2')
+        self.assertEqual(
+            work['org/project2'].remotes.origin.url,
             upstream_repo_path,
-            'workspace repo origin should be upstream, not cache'
+            'workspace repo origin should be the upstream url'
         )
 
         self.worker.hold_jobs_in_build = False
@@ -147,6 +171,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -217,6 +242,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -331,6 +357,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -393,6 +420,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -479,6 +507,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters['ZUUL_BRANCH'],
                 zuul_ref=build.parameters['ZUUL_REF'],
                 zuul_url=self.src_root,
@@ -544,6 +573,7 @@
                 git_base_url=self.upstream_root,
                 projects=projects,
                 workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
                 zuul_branch=build.parameters.get('ZUUL_BRANCH', None),
                 zuul_ref=build.parameters.get('ZUUL_REF', None),
                 zuul_url=self.src_root,
@@ -565,56 +595,158 @@
         self.worker.release()
         self.waitUntilSettled()
 
+    def test_periodic_update(self):
+        # Test that the merger correctly updates its local repository
+        # before running a periodic job.
+
+        # Prime the merger with the current state
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # Merge a different change
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        B.setMerged()
+
+        # Start a periodic job
+        self.worker.hold_jobs_in_build = True
+        self.launcher.negative_function_cache_ttl = 0
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-timer.yaml')
+        self.sched.reconfigure(self.config)
+        self.registerJobs()
+
+        # The pipeline triggers every second, so we should have seen
+        # several by now.
+        time.sleep(5)
+        self.waitUntilSettled()
+
+        builds = self.builds[:]
+
+        self.worker.hold_jobs_in_build = False
+        # Stop queuing timer-triggered jobs so that the assertions
+        # below don't race against more jobs being queued.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-no-timer.yaml')
+        self.sched.reconfigure(self.config)
+        self.registerJobs()
+        self.worker.release()
+        self.waitUntilSettled()
+
+        projects = ['org/project']
+
+        self.assertEquals(2, len(builds), "Two builds are running")
+
+        upstream = self.getUpstreamRepos(projects)
+        self.assertEqual(upstream['org/project'].commit('master').hexsha,
+                         B.patchsets[0]['revision'])
+        states = [
+            {'org/project':
+                str(upstream['org/project'].commit('master')),
+             },
+            {'org/project':
+                str(upstream['org/project'].commit('master')),
+             },
+        ]
+
+        for number, build in enumerate(builds):
+            self.log.debug("Build parameters: %s", build.parameters)
+            cloner = zuul.lib.cloner.Cloner(
+                git_base_url=self.upstream_root,
+                projects=projects,
+                workspace=self.workspace_root,
+                zuul_project=build.parameters.get('ZUUL_PROJECT', None),
+                zuul_branch=build.parameters.get('ZUUL_BRANCH', None),
+                zuul_ref=build.parameters.get('ZUUL_REF', None),
+                zuul_url=self.git_root,
+            )
+            cloner.execute()
+            work = self.getWorkspaceRepos(projects)
+            state = states[number]
+
+            for project in projects:
+                self.assertEquals(state[project],
+                                  str(work[project].commit('HEAD')),
+                                  'Project %s commit for build %s should '
+                                  'be correct' % (project, number))
+
+            shutil.rmtree(self.workspace_root)
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
     def test_post_checkout(self):
-        project = "org/project"
-        path = os.path.join(self.upstream_root, project)
-        repo = git.Repo(path)
-        repo.head.reference = repo.heads['master']
-        commits = []
-        for i in range(0, 3):
-            commits.append(self.create_commit(project))
-        newRev = commits[1]
+        self.worker.hold_jobs_in_build = True
+        project = "org/project1"
+
+        A = self.fake_gerrit.addFakeChange(project, 'master', 'A')
+        event = A.getRefUpdatedEvent()
+        A.setMerged()
+        self.fake_gerrit.addEvent(event)
+        self.waitUntilSettled()
+
+        build = self.builds[0]
+        state = {'org/project1': build.parameters['ZUUL_COMMIT']}
+
+        build.release()
+        self.waitUntilSettled()
 
         cloner = zuul.lib.cloner.Cloner(
             git_base_url=self.upstream_root,
             projects=[project],
             workspace=self.workspace_root,
-            zuul_branch=None,
-            zuul_ref='master',
-            zuul_url=self.src_root,
-            zuul_project=project,
-            zuul_newrev=newRev,
+            zuul_project=build.parameters.get('ZUUL_PROJECT', None),
+            zuul_branch=build.parameters.get('ZUUL_BRANCH', None),
+            zuul_ref=build.parameters.get('ZUUL_REF', None),
+            zuul_newrev=build.parameters.get('ZUUL_NEWREV', None),
+            zuul_url=self.git_root,
         )
         cloner.execute()
-        repos = self.getWorkspaceRepos([project])
-        cloned_sha = repos[project].rev_parse('HEAD').hexsha
-        self.assertEqual(newRev, cloned_sha)
+        work = self.getWorkspaceRepos([project])
+        self.assertEquals(state[project],
+                          str(work[project].commit('HEAD')),
+                          'Project %s commit for build %s should '
+                          'be correct' % (project, 0))
+        shutil.rmtree(self.workspace_root)
 
     def test_post_and_master_checkout(self):
-        project = "org/project1"
-        master_project = "org/project2"
-        path = os.path.join(self.upstream_root, project)
-        repo = git.Repo(path)
-        repo.head.reference = repo.heads['master']
-        commits = []
-        for i in range(0, 3):
-            commits.append(self.create_commit(project))
-        newRev = commits[1]
+        self.worker.hold_jobs_in_build = True
+        projects = ["org/project1", "org/project2"]
+
+        A = self.fake_gerrit.addFakeChange(projects[0], 'master', 'A')
+        event = A.getRefUpdatedEvent()
+        A.setMerged()
+        self.fake_gerrit.addEvent(event)
+        self.waitUntilSettled()
+
+        build = self.builds[0]
+        upstream = self.getUpstreamRepos(projects)
+        state = {'org/project1':
+                 build.parameters['ZUUL_COMMIT'],
+                 'org/project2':
+                 str(upstream['org/project2'].commit('master')),
+                 }
+
+        build.release()
+        self.waitUntilSettled()
 
         cloner = zuul.lib.cloner.Cloner(
             git_base_url=self.upstream_root,
-            projects=[project, master_project],
+            projects=projects,
             workspace=self.workspace_root,
-            zuul_branch=None,
-            zuul_ref='master',
-            zuul_url=self.src_root,
-            zuul_project=project,
-            zuul_newrev=newRev
+            zuul_project=build.parameters.get('ZUUL_PROJECT', None),
+            zuul_branch=build.parameters.get('ZUUL_BRANCH', None),
+            zuul_ref=build.parameters.get('ZUUL_REF', None),
+            zuul_newrev=build.parameters.get('ZUUL_NEWREV', None),
+            zuul_url=self.git_root,
         )
         cloner.execute()
-        repos = self.getWorkspaceRepos([project, master_project])
-        cloned_sha = repos[project].rev_parse('HEAD').hexsha
-        self.assertEqual(newRev, cloned_sha)
-        self.assertEqual(
-            repos[master_project].rev_parse('HEAD').hexsha,
-            repos[master_project].rev_parse('master').hexsha)
+        work = self.getWorkspaceRepos(projects)
+
+        for project in projects:
+            self.assertEquals(state[project],
+                              str(work[project].commit('HEAD')),
+                              'Project %s commit for build %s should '
+                              'be correct' % (project, 0))
+        shutil.rmtree(self.workspace_root)
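The origin assertions in the cloner tests above boil down to one rule: the project under test clones from the Zuul merger URL (most up to date), while every other project falls back to upstream. A hypothetical helper capturing that rule (`pick_origin` is not part of the cloner API; it only sketches the behavior the tests assert):

```python
import os

def pick_origin(project, zuul_project, zuul_url, upstream_root):
    # The zuul_project's workspace repo points at the merger URL; any
    # other project falls back to the upstream base URL.
    if project == zuul_project:
        return os.path.join(zuul_url, project)
    return os.path.join(upstream_root, project)
```

For example, with `zuul_project='org/project1'`, `org/project1` resolves under the merger URL and `org/project2` under upstream, which is exactly the pair of `assertEqual`/`assertNotEqual` checks in `test_cloner.py`.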
diff --git a/tests/unit/test_connection.py b/tests/unit/test_connection.py
index d9bc72f..8954832 100644
--- a/tests/unit/test_connection.py
+++ b/tests/unit/test_connection.py
@@ -12,14 +12,26 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-from tests.base import ZuulTestCase
+import sqlalchemy as sa
+from unittest import skip
+
+from tests.base import ZuulTestCase, ZuulDBTestCase
+
+
+def _get_reporter_from_connection_name(reporters, connection_name):
+    # Reporters are placed into lists for each action they may exist in.
+    # Search through the given list for the correct reporter by its
+    # connection name.
+    for r in reporters:
+        if r.connection.connection_name == connection_name:
+            return r
 
 
 class TestConnections(ZuulTestCase):
     config_file = 'zuul-connections-same-gerrit.conf'
     tenant_config_file = 'config/zuul-connections-same-gerrit/main.yaml'
 
-    def test_multiple_connections(self):
+    def test_multiple_gerrit_connections(self):
         "Test multiple connections to the one gerrit"
 
         A = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -45,9 +57,184 @@
         self.assertEqual(B.patchsets[-1]['approvals'][0]['by']['username'],
                          'civoter')
 
+    def _test_sql_tables_created(self, metadata_table=None):
+        "Test the tables for storing results are created properly"
+        buildset_table = 'zuul_buildset'
+        build_table = 'zuul_build'
+
+        insp = sa.engine.reflection.Inspector(
+            self.connections['resultsdb'].engine)
+
+        self.assertEqual(9, len(insp.get_columns(buildset_table)))
+        self.assertEqual(10, len(insp.get_columns(build_table)))
+
+    @skip("Disabled for early v3 development")
+    def test_sql_tables_created(self):
+        "Test the default table is created"
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-sql-reporter.yaml')
+        self.sched.reconfigure(self.config)
+        self._test_sql_tables_created()
+
+    def _test_sql_results(self):
+        "Test results are entered into an sql table"
+        # Grab the sa tables
+        reporter = _get_reporter_from_connection_name(
+            self.sched.layout.pipelines['check'].success_actions,
+            'resultsdb'
+        )
+
+        # Add a success result
+        A = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_review_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # Add a failed result for a negative score
+        B = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'B')
+        self.worker.addFailTest('project-test1', B)
+        self.fake_review_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        conn = self.connections['resultsdb'].engine.connect()
+        result = conn.execute(
+            sa.sql.select([reporter.connection.zuul_buildset_table]))
+
+        buildsets = result.fetchall()
+        self.assertEqual(2, len(buildsets))
+        buildset0 = buildsets[0]
+        buildset1 = buildsets[1]
+
+        self.assertEqual('check', buildset0['pipeline'])
+        self.assertEqual('org/project', buildset0['project'])
+        self.assertEqual(1, buildset0['change'])
+        self.assertEqual(1, buildset0['patchset'])
+        self.assertEqual(1, buildset0['score'])
+        self.assertEqual('Build succeeded.', buildset0['message'])
+
+        buildset0_builds = conn.execute(
+            sa.sql.select([reporter.connection.zuul_build_table]).
+            where(
+                reporter.connection.zuul_build_table.c.buildset_id ==
+                buildset0['id']
+            )
+        ).fetchall()
+
+        # Check the first result, which should be the project-merge job
+        self.assertEqual('project-merge', buildset0_builds[0]['job_name'])
+        self.assertEqual("SUCCESS", buildset0_builds[0]['result'])
+        self.assertEqual('http://logs.example.com/1/1/check/project-merge/0',
+                         buildset0_builds[0]['log_url'])
+
+        self.assertEqual('check', buildset1['pipeline'])
+        self.assertEqual('org/project', buildset1['project'])
+        self.assertEqual(2, buildset1['change'])
+        self.assertEqual(1, buildset1['patchset'])
+        self.assertEqual(-1, buildset1['score'])
+        self.assertEqual('Build failed.', buildset1['message'])
+
+        buildset1_builds = conn.execute(
+            sa.sql.select([reporter.connection.zuul_build_table]).
+            where(
+                reporter.connection.zuul_build_table.c.buildset_id ==
+                buildset1['id']
+            )
+        ).fetchall()
+
+        # Check the second-to-last result, which should be the
+        # project-test1 job that failed
+        self.assertEqual('project-test1', buildset1_builds[-2]['job_name'])
+        self.assertEqual("FAILURE", buildset1_builds[-2]['result'])
+        self.assertEqual('http://logs.example.com/2/1/check/project-test1/4',
+                         buildset1_builds[-2]['log_url'])
+
+    @skip("Disabled for early v3 development")
+    def test_sql_results(self):
+        "Test results are entered into the default sql table"
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-sql-reporter.yaml')
+        self.sched.reconfigure(self.config)
+        self._test_sql_results()
+
+    @skip("Disabled for early v3 development")
+    def test_multiple_sql_connections(self):
+        "Test putting results in different databases"
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-sql-reporter.yaml')
+        self.sched.reconfigure(self.config)
+
+        # Add a successful result
+        A = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_review_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # Add a failed result
+        B = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'B')
+        self.worker.addFailTest('project-test1', B)
+        self.fake_review_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # Grab the sa tables for resultsdb
+        reporter1 = _get_reporter_from_connection_name(
+            self.sched.layout.pipelines['check'].success_actions,
+            'resultsdb'
+        )
+
+        conn = self.connections['resultsdb'].engine.connect()
+        buildsets_resultsdb = conn.execute(sa.sql.select(
+            [reporter1.connection.zuul_buildset_table])).fetchall()
+        # There should have been 2 buildsets reported to the resultsdb
+        # (both the success and the failure report)
+        self.assertEqual(2, len(buildsets_resultsdb))
+
+        # The first one should have passed
+        self.assertEqual('check', buildsets_resultsdb[0]['pipeline'])
+        self.assertEqual('org/project', buildsets_resultsdb[0]['project'])
+        self.assertEqual(1, buildsets_resultsdb[0]['change'])
+        self.assertEqual(1, buildsets_resultsdb[0]['patchset'])
+        self.assertEqual(1, buildsets_resultsdb[0]['score'])
+        self.assertEqual('Build succeeded.', buildsets_resultsdb[0]['message'])
+
+        # Grab the sa tables for resultsdb_failures
+        reporter2 = _get_reporter_from_connection_name(
+            self.sched.layout.pipelines['check'].failure_actions,
+            'resultsdb_failures'
+        )
+
+        conn = self.connections['resultsdb_failures'].engine.connect()
+        buildsets_resultsdb_failures = conn.execute(sa.sql.select(
+            [reporter2.connection.zuul_buildset_table])).fetchall()
+        # The failure db should contain only the 1 failed buildset
+        self.assertEqual(1, len(buildsets_resultsdb_failures))
+
+        self.assertEqual('check', buildsets_resultsdb_failures[0]['pipeline'])
+        self.assertEqual(
+            'org/project', buildsets_resultsdb_failures[0]['project'])
+        self.assertEqual(2, buildsets_resultsdb_failures[0]['change'])
+        self.assertEqual(1, buildsets_resultsdb_failures[0]['patchset'])
+        self.assertEqual(-1, buildsets_resultsdb_failures[0]['score'])
+        self.assertEqual(
+            'Build failed.', buildsets_resultsdb_failures[0]['message'])
+
+
+class TestConnectionsBadSQL(ZuulDBTestCase):
+    def setup_config(self, config_file='zuul-connections-bad-sql.conf'):
+        super(TestConnectionsBadSQL, self).setup_config(config_file)
+
+    @skip("Disabled for early v3 development")
+    def test_unable_to_connect(self):
+        "Test the SQL reporter fails gracefully when unable to connect"
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-sql-reporter.yaml')
+        self.sched.reconfigure(self.config)
+
+        # Trigger a reporter. If no errors are raised, the reporter has been
+        # disabled correctly
+        A = self.fake_review_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_review_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
 
 class TestMultipleGerrits(ZuulTestCase):
-
     config_file = 'zuul-connections-multiple-gerrits.conf'
     tenant_config_file = 'config/zuul-connections-multiple-gerrits/main.yaml'
 
diff --git a/tests/unit/test_layoutvalidator.py b/tests/unit/test_layoutvalidator.py
deleted file mode 100644
index 38c8e29..0000000
--- a/tests/unit/test_layoutvalidator.py
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2013 OpenStack Foundation
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from six.moves import configparser as ConfigParser
-import os
-import re
-
-import testtools
-import voluptuous
-import yaml
-
-import zuul.layoutvalidator
-import zuul.lib.connections
-
-FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
-                           'fixtures')
-LAYOUT_RE = re.compile(r'^(good|bad)_.*\.yaml$')
-
-
-class TestLayoutValidator(testtools.TestCase):
-    def setUp(self):
-        self.skip("Disabled for early v3 development")
-
-    def test_layouts(self):
-        """Test layout file validation"""
-        print()
-        errors = []
-        for fn in os.listdir(os.path.join(FIXTURE_DIR, 'layouts')):
-            m = LAYOUT_RE.match(fn)
-            if not m:
-                continue
-            print(fn)
-
-            # Load any .conf file by the same name but .conf extension.
-            config_file = ("%s.conf" %
-                           os.path.join(FIXTURE_DIR, 'layouts',
-                                        fn.split('.yaml')[0]))
-            if not os.path.isfile(config_file):
-                config_file = os.path.join(FIXTURE_DIR, 'layouts',
-                                           'zuul_default.conf')
-            config = ConfigParser.ConfigParser()
-            config.read(config_file)
-            connections = zuul.lib.connections.configure_connections(config)
-
-            layout = os.path.join(FIXTURE_DIR, 'layouts', fn)
-            data = yaml.load(open(layout))
-            validator = zuul.layoutvalidator.LayoutValidator()
-            if m.group(1) == 'good':
-                try:
-                    validator.validate(data, connections)
-                except voluptuous.Invalid as e:
-                    raise Exception(
-                        'Unexpected YAML syntax error in %s:\n  %s' %
-                        (fn, str(e)))
-            else:
-                try:
-                    validator.validate(data, connections)
-                    raise Exception("Expected a YAML syntax error in %s." %
-                                    fn)
-                except voluptuous.Invalid as e:
-                    error = str(e)
-                    print('  ', error)
-                    if error in errors:
-                        raise Exception("Error has already been tested: %s" %
-                                        error)
-                    else:
-                        errors.append(error)
-                    pass
diff --git a/tests/unit/test_model.py b/tests/unit/test_model.py
index 9bd405e..38615a9 100644
--- a/tests/unit/test_model.py
+++ b/tests/unit/test_model.py
@@ -30,14 +30,15 @@
     def setUp(self):
         super(TestJob, self).setUp()
         self.project = model.Project('project', None)
-        self.context = model.SourceContext(self.project, 'master', True)
+        self.context = model.SourceContext(self.project, 'master',
+                                           'test', True)
 
     @property
     def job(self):
         tenant = model.Tenant('tenant')
         layout = model.Layout()
         project = model.Project('project', None)
-        context = model.SourceContext(project, 'master', True)
+        context = model.SourceContext(project, 'master', 'test', True)
         job = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': context,
             'name': 'job',
@@ -142,7 +143,7 @@
         layout.addPipeline(pipeline)
         queue = model.ChangeQueue(pipeline)
         project = model.Project('project', None)
-        context = model.SourceContext(project, 'master', True)
+        context = model.SourceContext(project, 'master', 'test', True)
 
         base = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': context,
@@ -206,7 +207,7 @@
                 ]
             }
         }])
-        layout.addProjectConfig(project_config, update_pipeline=False)
+        layout.addProjectConfig(project_config)
 
         change = model.Change(project)
         # Test master
@@ -296,7 +297,7 @@
         tenant = model.Tenant('tenant')
         layout = model.Layout()
         project = model.Project('project', None)
-        context = model.SourceContext(project, 'master', True)
+        context = model.SourceContext(project, 'master', 'test', True)
 
         base = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': context,
@@ -381,7 +382,7 @@
         layout.addPipeline(pipeline)
         queue = model.ChangeQueue(pipeline)
         project = model.Project('project', None)
-        context = model.SourceContext(project, 'master', True)
+        context = model.SourceContext(project, 'master', 'test', True)
 
         base = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': context,
@@ -415,7 +416,7 @@
                 ]
             }
         }])
-        layout.addProjectConfig(project_config, update_pipeline=False)
+        layout.addProjectConfig(project_config)
 
         change = model.Change(project)
         change.branch = 'master'
@@ -454,7 +455,7 @@
         layout.addPipeline(pipeline)
         queue = model.ChangeQueue(pipeline)
         project = model.Project('project', None)
-        context = model.SourceContext(project, 'master', True)
+        context = model.SourceContext(project, 'master', 'test', True)
 
         base = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': context,
@@ -480,7 +481,7 @@
                 ]
             }
         }])
-        layout.addProjectConfig(project_config, update_pipeline=False)
+        layout.addProjectConfig(project_config)
 
         change = model.Change(project)
         change.branch = 'master'
@@ -498,7 +499,8 @@
         tenant = model.Tenant('tenant')
         layout = model.Layout()
         base_project = model.Project('base_project', None)
-        base_context = model.SourceContext(base_project, 'master', True)
+        base_context = model.SourceContext(base_project, 'master',
+                                           'test', True)
 
         base = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': base_context,
@@ -507,7 +509,8 @@
         layout.addJob(base)
 
         other_project = model.Project('other_project', None)
-        other_context = model.SourceContext(other_project, 'master', True)
+        other_context = model.SourceContext(other_project, 'master',
+                                            'test', True)
         base2 = configloader.JobParser.fromYaml(tenant, layout, {
             '_source_context': other_context,
             'name': 'base',
diff --git a/tests/unit/test_nodepool.py b/tests/unit/test_nodepool.py
index 19c7e05..0a55f9f 100644
--- a/tests/unit/test_nodepool.py
+++ b/tests/unit/test_nodepool.py
@@ -37,6 +37,7 @@
 
         self.zk = zuul.zk.ZooKeeper()
         self.zk.connect(self.zk_config)
+        self.hostname = 'nodepool-test-hostname'
 
         self.provisioned_requests = []
         # This class implements the scheduler methods zuul.nodepool
diff --git a/tests/unit/test_scheduler.py b/tests/unit/test_scheduler.py
index d44369b..1e56fae 100755
--- a/tests/unit/test_scheduler.py
+++ b/tests/unit/test_scheduler.py
@@ -251,9 +251,12 @@
         C.addApproval('code-review', 2)
 
         self.fake_gerrit.addEvent(A.addApproval('approved', 1))
-        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
-        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
+        self.waitUntilSettled()
 
+        self.fake_gerrit.addEvent(B.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.fake_gerrit.addEvent(C.addApproval('approved', 1))
         self.waitUntilSettled()
 
         # There should be one merge job at the head of each queue running
@@ -2922,6 +2925,50 @@
         self.launch_server.release('.*')
         self.waitUntilSettled()
 
+    @skip("Disabled for early v3 development")
+    def test_timer_sshkey(self):
+        "Test that a periodic job can setup SSH key authentication"
+        self.worker.hold_jobs_in_build = True
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-timer.yaml')
+        self.sched.reconfigure(self.config)
+        self.registerJobs()
+
+        # The pipeline triggers every second, so we should have seen
+        # several by now.
+        time.sleep(5)
+        self.waitUntilSettled()
+
+        self.assertEqual(len(self.builds), 2)
+
+        ssh_wrapper = os.path.join(self.git_root, ".ssh_wrapper_gerrit")
+        self.assertTrue(os.path.isfile(ssh_wrapper))
+        with open(ssh_wrapper) as f:
+            ssh_wrapper_content = f.read()
+        self.assertIn("fake_id_rsa", ssh_wrapper_content)
+        # In the unit tests the Merger runs in the same process,
+        # so we see its environment variables
+        self.assertEqual(os.environ['GIT_SSH'], ssh_wrapper)
+
+        self.worker.release('.*')
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 2)
+
+        self.assertEqual(self.getJobFromHistory(
+            'project-bitrot-stable-old').result, 'SUCCESS')
+        self.assertEqual(self.getJobFromHistory(
+            'project-bitrot-stable-older').result, 'SUCCESS')
+
+        # Stop queuing timer triggered jobs and let any that may have
+        # queued through so that end of test assertions pass.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-no-timer.yaml')
+        self.sched.reconfigure(self.config)
+        self.registerJobs()
+        self.waitUntilSettled()
+        self.worker.release('.*')
+        self.waitUntilSettled()
+
     def test_client_enqueue_change(self):
         "Test that the RPC client can enqueue a change"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -3538,52 +3585,6 @@
         self.assertNotIn('logs.example.com', B.messages[1])
         self.assertNotIn('SKIPPED', B.messages[1])
 
-    @skip("Disabled for early v3 development")
-    def test_swift_instructions(self):
-        "Test that the correct swift instructions are sent to the workers"
-        self.updateConfigLayout(
-            'tests/fixtures/layout-swift.yaml')
-        self.sched.reconfigure(self.config)
-        self.registerJobs()
-
-        self.launch_server.hold_jobs_in_build = True
-        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
-
-        A.addApproval('code-review', 2)
-        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
-        self.waitUntilSettled()
-
-        self.assertEqual(
-            "https://storage.example.org/V1/AUTH_account/merge_logs/1/1/1/"
-            "gate/test-merge/",
-            self.builds[0].parameters['SWIFT_logs_URL'][:-7])
-        self.assertEqual(5,
-                         len(self.builds[0].parameters['SWIFT_logs_HMAC_BODY'].
-                             split('\n')))
-        self.assertIn('SWIFT_logs_SIGNATURE', self.builds[0].parameters)
-
-        self.assertEqual(
-            "https://storage.example.org/V1/AUTH_account/logs/1/1/1/"
-            "gate/test-test/",
-            self.builds[1].parameters['SWIFT_logs_URL'][:-7])
-        self.assertEqual(5,
-                         len(self.builds[1].parameters['SWIFT_logs_HMAC_BODY'].
-                             split('\n')))
-        self.assertIn('SWIFT_logs_SIGNATURE', self.builds[1].parameters)
-
-        self.assertEqual(
-            "https://storage.example.org/V1/AUTH_account/stash/1/1/1/"
-            "gate/test-test/",
-            self.builds[1].parameters['SWIFT_MOSTLY_URL'][:-7])
-        self.assertEqual(5,
-                         len(self.builds[1].
-                             parameters['SWIFT_MOSTLY_HMAC_BODY'].split('\n')))
-        self.assertIn('SWIFT_MOSTLY_SIGNATURE', self.builds[1].parameters)
-
-        self.launch_server.hold_jobs_in_build = False
-        self.launch_server.release()
-        self.waitUntilSettled()
-
     def test_client_get_running_jobs(self):
         "Test that the RPC client can get a list of running jobs"
         self.launch_server.hold_jobs_in_build = True
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
index cf88265..3a4e164 100644
--- a/tests/unit/test_v3.py
+++ b/tests/unit/test_v3.py
@@ -116,6 +116,7 @@
             dict(name='project-test2', result='SUCCESS', changes='1,1')])
 
         self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+        self.waitUntilSettled()
 
         # Now that the config change is landed, it should be live for
         # subsequent changes.
@@ -164,6 +165,7 @@
         self.assertHistory([
             dict(name='project-test2', result='SUCCESS', changes='1,1')])
         self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+        self.waitUntilSettled()
 
         # The config change should not affect master.
         B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
@@ -185,7 +187,7 @@
             dict(name='project-test1', result='SUCCESS', changes='2,1'),
             dict(name='project-test2', result='SUCCESS', changes='3,1')])
 
-    def test_dynamic_syntax_error(self):
+    def test_untrusted_syntax_error(self):
         in_repo_conf = textwrap.dedent(
             """
             - job:
@@ -206,6 +208,47 @@
         self.assertIn('syntax error', A.messages[1],
                       "A should have a syntax error reported")
 
+    def test_trusted_syntax_error(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: project-test2
+                foo: error
+            """)
+
+        file_dict = {'zuul.yaml': in_repo_conf}
+        A = self.fake_gerrit.addFakeChange('common-config', 'master', 'A',
+                                           files=file_dict)
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and failure")
+        self.assertIn('syntax error', A.messages[1],
+                      "A should have a syntax error reported")
+
+    def test_untrusted_yaml_error(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+            foo: error
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf}
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        A.addApproval('code-review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('approved', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 2,
+                         "A should report start and failure")
+        self.assertIn('syntax error', A.messages[1],
+                      "A should have a syntax error reported")
+
 
 class TestAnsible(AnsibleZuulTestCase):
     # A temporary class to hold new tests while others are disabled
diff --git a/tools/test-setup.sh b/tools/test-setup.sh
new file mode 100755
index 0000000..f4a0458
--- /dev/null
+++ b/tools/test-setup.sh
@@ -0,0 +1,33 @@
+#!/bin/bash -xe
+
+# This script will be run by OpenStack CI before unit tests are run;
+# it sets up the test system as needed.
+# Developers should setup their test systems in a similar way.
+
+# This setup needs to be run as a user that can run sudo.
+
+# The root password for the MySQL database; pass it in via
+# MYSQL_ROOT_PW.
+DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave}
+
+# This user and its password are used by the tests; if you change
+# them, your tests might fail.
+DB_USER=openstack_citest
+DB_PW=openstack_citest
+
+sudo -H mysqladmin -u root password $DB_ROOT_PW
+
+# It's best practice to remove anonymous users from the database.  If
+# an anonymous user exists, it matches connection attempts first, and
+# other connections from that host will not work.
+sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e "
+    DELETE FROM mysql.user WHERE User='';
+    FLUSH PRIVILEGES;
+    GRANT ALL PRIVILEGES ON *.*
+        TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;"
+
+# Now create our database.
+mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e "
+    SET default_storage_engine=MYISAM;
+    DROP DATABASE IF EXISTS openstack_citest;
+    CREATE DATABASE openstack_citest CHARACTER SET utf8;"
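The script above provisions an `openstack_citest` user and database for the unit tests. As a minimal sketch (the helper name is illustrative, not part of the repo), the matching `mysql+pymysql` connection URL that a test configuration would point at can be assembled like this:

```python
# Hypothetical helper: builds the pymysql DSN matching the defaults
# created by tools/test-setup.sh.  Not part of the repo itself.
DB_USER = 'openstack_citest'
DB_PW = 'openstack_citest'
DB_HOST = '127.0.0.1'
DB_NAME = 'openstack_citest'


def make_dburi(user=DB_USER, password=DB_PW, host=DB_HOST, name=DB_NAME):
    """Return a mysql+pymysql URL for the test database."""
    return 'mysql+pymysql://%s:%s@%s/%s' % (user, password, host, name)


print(make_dburi())
# -> mysql+pymysql://openstack_citest:openstack_citest@127.0.0.1/openstack_citest
```

Note the script grants on `'%'` and the tests connect to `127.0.0.1` rather than `localhost`, so the TCP connection matches the granted user and not the anonymous-user socket entry.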
diff --git a/zuul/alembic/sql_reporter/README b/zuul/alembic/sql_reporter/README
new file mode 100644
index 0000000..98e4f9c
--- /dev/null
+++ b/zuul/alembic/sql_reporter/README
@@ -0,0 +1 @@
+Generic single-database configuration.
\ No newline at end of file
diff --git a/zuul/alembic/sql_reporter/env.py b/zuul/alembic/sql_reporter/env.py
new file mode 100644
index 0000000..56a5b7e
--- /dev/null
+++ b/zuul/alembic/sql_reporter/env.py
@@ -0,0 +1,70 @@
+from __future__ import with_statement
+from alembic import context
+from sqlalchemy import engine_from_config, pool
+# from logging.config import fileConfig
+
+# this is the Alembic Config object, which provides
+# access to the values within the .ini file in use.
+config = context.config
+
+# Interpret the config file for Python logging.
+# This line sets up loggers basically.
+# fileConfig(config.config_file_name)
+
+# add your model's MetaData object here
+# for 'autogenerate' support
+# from myapp import mymodel
+# target_metadata = mymodel.Base.metadata
+target_metadata = None
+
+# other values from the config, defined by the needs of env.py,
+# can be acquired:
+# my_important_option = config.get_main_option("my_important_option")
+# ... etc.
+
+
+def run_migrations_offline():
+    """Run migrations in 'offline' mode.
+
+    This configures the context with just a URL
+    and not an Engine, though an Engine is acceptable
+    here as well.  By skipping the Engine creation
+    we don't even need a DBAPI to be available.
+
+    Calls to context.execute() here emit the given string to the
+    script output.
+
+    """
+    url = config.get_main_option("sqlalchemy.url")
+    context.configure(
+        url=url, target_metadata=target_metadata, literal_binds=True)
+
+    with context.begin_transaction():
+        context.run_migrations()
+
+
+def run_migrations_online():
+    """Run migrations in 'online' mode.
+
+    In this scenario we need to create an Engine
+    and associate a connection with the context.
+
+    """
+    connectable = engine_from_config(
+        config.get_section(config.config_ini_section),
+        prefix='sqlalchemy.',
+        poolclass=pool.NullPool)
+
+    with connectable.connect() as connection:
+        context.configure(
+            connection=connection,
+            target_metadata=target_metadata
+        )
+
+        with context.begin_transaction():
+            context.run_migrations()
+
+if context.is_offline_mode():
+    run_migrations_offline()
+else:
+    run_migrations_online()
diff --git a/zuul/alembic/sql_reporter/script.py.mako b/zuul/alembic/sql_reporter/script.py.mako
new file mode 100644
index 0000000..43c0940
--- /dev/null
+++ b/zuul/alembic/sql_reporter/script.py.mako
@@ -0,0 +1,24 @@
+"""${message}
+
+Revision ID: ${up_revision}
+Revises: ${down_revision | comma,n}
+Create Date: ${create_date}
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = ${repr(up_revision)}
+down_revision = ${repr(down_revision)}
+branch_labels = ${repr(branch_labels)}
+depends_on = ${repr(depends_on)}
+
+from alembic import op
+import sqlalchemy as sa
+${imports if imports else ""}
+
+def upgrade():
+    ${upgrades if upgrades else "pass"}
+
+
+def downgrade():
+    ${downgrades if downgrades else "pass"}
diff --git a/zuul/alembic/sql_reporter/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py b/zuul/alembic/sql_reporter/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py
new file mode 100644
index 0000000..783196f
--- /dev/null
+++ b/zuul/alembic/sql_reporter/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py
@@ -0,0 +1,53 @@
+"""Set up initial reporter tables
+
+Revision ID: 4d3ebd7f06b9
+Revises:
+Create Date: 2015-12-06 15:27:38.080020
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = '4d3ebd7f06b9'
+down_revision = None
+branch_labels = None
+depends_on = None
+
+from alembic import op
+import sqlalchemy as sa
+
+BUILDSET_TABLE = 'zuul_buildset'
+BUILD_TABLE = 'zuul_build'
+
+
+def upgrade():
+    op.create_table(
+        BUILDSET_TABLE,
+        sa.Column('id', sa.Integer, primary_key=True),
+        sa.Column('zuul_ref', sa.String(255)),
+        sa.Column('pipeline', sa.String(255)),
+        sa.Column('project', sa.String(255)),
+        sa.Column('change', sa.Integer, nullable=True),
+        sa.Column('patchset', sa.Integer, nullable=True),
+        sa.Column('ref', sa.String(255)),
+        sa.Column('score', sa.Integer),
+        sa.Column('message', sa.TEXT()),
+    )
+
+    op.create_table(
+        BUILD_TABLE,
+        sa.Column('id', sa.Integer, primary_key=True),
+        sa.Column('buildset_id', sa.Integer,
+                  sa.ForeignKey(BUILDSET_TABLE + ".id")),
+        sa.Column('uuid', sa.String(36)),
+        sa.Column('job_name', sa.String(255)),
+        sa.Column('result', sa.String(255)),
+        sa.Column('start_time', sa.DateTime()),
+        sa.Column('end_time', sa.DateTime()),
+        sa.Column('voting', sa.Boolean),
+        sa.Column('log_url', sa.String(255)),
+        sa.Column('node_name', sa.String(255)),
+    )
+
+
+def downgrade():
+    raise Exception("Downgrades not supported")
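The migration above is the whole reporter schema: one `zuul_buildset` row per report, with `zuul_build` rows pointing back via `buildset_id`. A rough sketch of that relationship using the stdlib sqlite3 module (column types approximated; the real tables are created by Alembic against MySQL):

```python
import sqlite3

# In-memory stand-in for the reporter schema defined by the migration.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE zuul_buildset (
    id INTEGER PRIMARY KEY, zuul_ref TEXT, pipeline TEXT, project TEXT,
    "change" INTEGER, patchset INTEGER, ref TEXT, score INTEGER,
    message TEXT)""")
conn.execute("""CREATE TABLE zuul_build (
    id INTEGER PRIMARY KEY,
    buildset_id INTEGER REFERENCES zuul_buildset(id),
    uuid TEXT, job_name TEXT, result TEXT, start_time TEXT,
    end_time TEXT, voting INTEGER, log_url TEXT, node_name TEXT)""")

# One buildset per report...
cur = conn.execute(
    'INSERT INTO zuul_buildset (pipeline, project, "change", patchset, '
    "score, message) VALUES ('check', 'org/project', 1, 1, 1, "
    "'Build succeeded.')")
buildset_id = cur.lastrowid

# ...with its builds keyed back to it by buildset_id.
conn.execute(
    'INSERT INTO zuul_build (buildset_id, job_name, result) '
    'VALUES (?, ?, ?)', (buildset_id, 'project-merge', 'SUCCESS'))

row = conn.execute(
    'SELECT job_name, result FROM zuul_build WHERE buildset_id = ?',
    (buildset_id,)).fetchone()
print(row)
```

This mirrors the join the tests perform when they select from `zuul_build_table` filtered by `buildset_id`.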
diff --git a/zuul/alembic_reporter.ini b/zuul/alembic_reporter.ini
new file mode 100644
index 0000000..b7f787c
--- /dev/null
+++ b/zuul/alembic_reporter.ini
@@ -0,0 +1,69 @@
+# A generic, single database configuration.
+
+[alembic]
+# path to migration scripts
+# NOTE(jhesketh): We may use alembic for other db components of zuul in the
+# future. Use a sub-folder for the reporter's own versions.
+script_location = alembic/sql_reporter
+
+# template used to generate migration files
+# file_template = %%(rev)s_%%(slug)s
+
+# max length of characters to apply to the
+# "slug" field
+#truncate_slug_length = 40
+
+# set to 'true' to run the environment during
+# the 'revision' command, regardless of autogenerate
+# revision_environment = false
+
+# set to 'true' to allow .pyc and .pyo files without
+# a source .py file to be detected as revisions in the
+# versions/ directory
+# sourceless = false
+
+# version location specification; this defaults
+# to alembic/versions.  When using multiple version
+# directories, initial revisions must be specified with --version-path
+# version_locations = %(here)s/bar %(here)s/bat alembic/versions
+
+# the output encoding used when revision files
+# are written from script.py.mako
+# output_encoding = utf-8
+
+sqlalchemy.url = mysql+pymysql://user@localhost/database
+
+# Logging configuration
+[loggers]
+keys = root,sqlalchemy,alembic
+
+[handlers]
+keys = console
+
+[formatters]
+keys = generic
+
+[logger_root]
+level = WARN
+handlers = console
+qualname =
+
+[logger_sqlalchemy]
+level = WARN
+handlers =
+qualname = sqlalchemy.engine
+
+[logger_alembic]
+level = INFO
+handlers =
+qualname = alembic
+
+[handler_console]
+class = StreamHandler
+args = (sys.stderr,)
+level = NOTSET
+formatter = generic
+
+[formatter_generic]
+format = %(levelname)-5.5s [%(name)s] %(message)s
+datefmt = %H:%M:%S
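The ini above is a mostly stock Alembic configuration; the piece Zuul cares about is the `sqlalchemy.url` value that env.py reads via `config.get_main_option`. A small sketch of pulling it out with the stdlib configparser (the in-memory ini here is a trimmed copy of the file, not the file itself):

```python
import configparser
import io

# Trimmed, assumed copy of the [alembic] section of alembic_reporter.ini.
ini_text = """
[alembic]
script_location = alembic/sql_reporter
sqlalchemy.url = mysql+pymysql://user@localhost/database
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(ini_text))

# env.py ultimately hands this URL to engine_from_config / the offline
# context; here we just read it back.
url = cfg.get('alembic', 'sqlalchemy.url')
print(url)
```

In a deployment this placeholder URL would be overridden to point at the real reporter database.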
diff --git a/zuul/cmd/scheduler.py b/zuul/cmd/scheduler.py
index 9a8b24f..8b8fb57 100755
--- a/zuul/cmd/scheduler.py
+++ b/zuul/cmd/scheduler.py
@@ -127,7 +127,6 @@
         import zuul.launcher.client
         import zuul.merger.client
         import zuul.nodepool
-        import zuul.lib.swift
         import zuul.webapp
         import zuul.rpclistener
         import zuul.zk
@@ -141,11 +140,8 @@
         self.log = logging.getLogger("zuul.Scheduler")
 
         self.sched = zuul.scheduler.Scheduler(self.config)
-        # TODO(jhesketh): Move swift into a connection?
-        self.swift = zuul.lib.swift.Swift(self.config)
 
-        gearman = zuul.launcher.client.LaunchClient(self.config, self.sched,
-                                                    self.swift)
+        gearman = zuul.launcher.client.LaunchClient(self.config, self.sched)
         merger = zuul.merger.client.MergeClient(self.config, self.sched)
         nodepool = zuul.nodepool.Nodepool(self.sched)
 
diff --git a/zuul/configloader.py b/zuul/configloader.py
index 42616a8..2c31341 100644
--- a/zuul/configloader.py
+++ b/zuul/configloader.py
@@ -69,6 +69,31 @@
         raise ConfigurationSyntaxError(m)
 
 
+class ZuulSafeLoader(yaml.SafeLoader):
+    def __init__(self, stream, context):
+        super(ZuulSafeLoader, self).__init__(stream)
+        self.name = str(context)
+
+
+def safe_load_yaml(stream, context):
+    loader = ZuulSafeLoader(stream, context)
+    try:
+        return loader.get_single_data()
+    except yaml.YAMLError as e:
+        m = """
+Zuul encountered a syntax error while parsing its configuration in the
+repo {repo} on branch {branch}.  The error was:
+
+  {error}
+"""
+        m = m.format(repo=context.project.name,
+                     branch=context.branch,
+                     error=str(e))
+        raise ConfigurationSyntaxError(m)
+    finally:
+        loader.dispose()
+
+
 class NodeSetParser(object):
     @staticmethod
     def getSchema():
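The `safe_load_yaml` helper added above converts a low-level parser error into a `ConfigurationSyntaxError` that names the repo and branch of the offending source context. The same wrap-and-rethrow pattern, sketched with the stdlib json parser standing in for yaml (everything but the exception name is illustrative):

```python
import json


class ConfigurationSyntaxError(Exception):
    pass


def safe_load(text, repo, branch):
    """Parse config text, converting parse errors into a contextual message."""
    try:
        return json.loads(text)
    except ValueError as e:
        raise ConfigurationSyntaxError(
            'Zuul encountered a syntax error while parsing its '
            'configuration in the repo %s on branch %s.  The error was:'
            '\n\n  %s' % (repo, branch, e))


try:
    safe_load('{bad', 'org/project', 'master')
except ConfigurationSyntaxError as e:
    print('error reported for org/project')
```

The point of the wrapper is that the message reported back to the change names where the bad config lives, which is what the `test_untrusted_yaml_error` test asserts on.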
@@ -97,20 +122,8 @@
 class JobParser(object):
     @staticmethod
     def getSchema():
-        swift_tmpurl = {vs.Required('name'): str,
-                        'container': str,
-                        'expiry': int,
-                        'max_file_size': int,
-                        'max-file-size': int,
-                        'max_file_count': int,
-                        'max-file-count': int,
-                        'logserver_prefix': str,
-                        'logserver-prefix': str,
-                        }
-
         auth = {'secrets': to_list(str),
                 'inherit': bool,
-                'swift-tmpurl': to_list(swift_tmpurl),
                 }
 
         node = {vs.Required('name'): str,
@@ -381,10 +394,7 @@
             with configuration_exceptions('project', conf):
                 ProjectParser.getSchema(layout)(conf)
         project = model.ProjectConfig(conf_list[0]['name'])
-        mode = conf_list[0].get('merge-mode', 'merge-resolve')
-        project.merge_mode = model.MERGER_MAP[mode]
 
-        # TODOv3(jeblair): deal with merge mode setting on multi branches
         configs = []
         for conf in conf_list:
             # Make a copy since we modify this later via pop
@@ -398,6 +408,15 @@
             configs.extend([layout.project_templates[name]
                             for name in conf_templates])
             configs.append(project_template)
+            mode = conf.get('merge-mode')
+            if mode and project.merge_mode is None:
+                # Set the merge mode to the first one that we find and
+                # ignore subsequent settings.
+                project.merge_mode = model.MERGER_MAP[mode]
+        if project.merge_mode is None:
+            # If merge mode was not specified in any project stanza,
+            # set it to the default.
+            project.merge_mode = model.MERGER_MAP['merge-resolve']
         for pipeline in layout.pipelines.values():
             project_pipeline = model.ProjectPipelineConfig()
             project_pipeline.job_tree = model.JobTree(None)
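The new merge-mode handling above is first-setting-wins across the project stanzas, with `merge-resolve` as the fallback when no stanza sets one. A tiny sketch of that resolution order (`resolve_merge_mode` and the stand-in `MERGER_MAP` values are illustrative, not Zuul's actual `model.MERGER_MAP`):

```python
# Stand-in values; the real MERGER_MAP lives in zuul/model.py.
MERGER_MAP = {'merge': 1, 'merge-resolve': 2, 'cherry-pick': 3}


def resolve_merge_mode(conf_list):
    merge_mode = None
    for conf in conf_list:
        mode = conf.get('merge-mode')
        if mode and merge_mode is None:
            # First explicit setting wins; later stanzas are ignored.
            merge_mode = MERGER_MAP[mode]
    if merge_mode is None:
        # No stanza set a merge mode; use the default.
        merge_mode = MERGER_MAP['merge-resolve']
    return merge_mode
```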
@@ -589,12 +608,10 @@
             )
             manager.changeish_filters.append(f)
 
-        for trigger_name, trigger_config\
-            in conf.get('trigger').items():
+        for trigger_name, trigger_config in conf.get('trigger').items():
             trigger = connections.getTrigger(trigger_name, trigger_config)
             pipeline.triggers.append(trigger)
 
-            # TODO: move
             manager.event_filters += trigger.getEventFilters(
                 conf['trigger'][trigger_name])
 
@@ -695,7 +712,8 @@
             url = source.getGitUrl(project)
             job = merger.getFiles(project.name, url, 'master',
                                   files=['zuul.yaml', '.zuul.yaml'])
-            job.source_context = model.SourceContext(project, 'master', True)
+            job.source_context = model.SourceContext(project, 'master',
+                                                     '', True)
             jobs.append(job)
 
         for (source, project) in project_repos:
@@ -721,8 +739,8 @@
                     model.UnparsedTenantConfig()
                 job = merger.getFiles(project.name, url, branch,
                                       files=['.zuul.yaml'])
-                job.source_context = model.SourceContext(project,
-                                                         branch, False)
+                job.source_context = model.SourceContext(
+                    project, branch, '', False)
                 jobs.append(job)
 
         for job in jobs:
@@ -732,11 +750,20 @@
             # This is important for correct inheritance.
             TenantParser.log.debug("Waiting for cat job %s" % (job,))
             job.wait()
+            loaded = False
             for fn in ['zuul.yaml', '.zuul.yaml']:
                 if job.files.get(fn):
+                    # Don't load from more than one file in a repo-branch
+                    if loaded:
+                        TenantParser.log.warning(
+                            "Multiple configuration files in %s" %
+                            (job.source_context,))
+                        continue
+                    loaded = True
+                    job.source_context.path = fn
                     TenantParser.log.info(
-                        "Loading configuration from %s/%s" %
-                        (job.source_context, fn))
+                        "Loading configuration from %s" %
+                        (job.source_context,))
                     project = job.source_context.project
                     branch = job.source_context.branch
                     if job.source_context.trusted:
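The `loaded` flag above makes the file-search preference explicit: `zuul.yaml` wins over `.zuul.yaml` on the same repo-branch, and any additional file is warned about and skipped. A sketch of just that selection (`pick_config_file` is a hypothetical helper operating on a plain dict of fetched files):

```python
def pick_config_file(files):
    # Prefer zuul.yaml; skip (and report) any later match on the same
    # repo-branch, mirroring the "loaded" guard above.
    loaded = None
    skipped = []
    for fn in ['zuul.yaml', '.zuul.yaml']:
        if files.get(fn):
            if loaded:
                skipped.append(fn)
                continue
            loaded = fn
    return loaded, skipped
```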
@@ -756,7 +783,7 @@
     def _parseConfigRepoLayout(data, source_context):
         # This is the top-level configuration for a tenant.
         config = model.UnparsedTenantConfig()
-        config.extend(yaml.load(data), source_context)
+        config.extend(safe_load_yaml(data, source_context), source_context)
         return config
 
     @staticmethod
@@ -764,8 +791,7 @@
         # TODOv3(jeblair): this should implement some rules to protect
         # aspects of the config that should not be changed in-repo
         config = model.UnparsedTenantConfig()
-        config.extend(yaml.load(data), source_context)
-
+        config.extend(safe_load_yaml(data, source_context), source_context)
         return config
 
     @staticmethod
@@ -814,7 +840,7 @@
         config_path = self.expandConfigPath(config_path)
         with open(config_path) as config_file:
             self.log.info("Loading configuration from %s" % (config_path,))
-            data = yaml.load(config_file)
+            data = yaml.safe_load(config_file)
         config = model.UnparsedAbideConfig()
         config.extend(data)
         base = os.path.dirname(os.path.realpath(config_path))
@@ -841,23 +867,50 @@
         new_abide.tenants[tenant.name] = new_tenant
         return new_abide
 
-    def createDynamicLayout(self, tenant, files):
-        config = tenant.config_repos_config.copy()
-        for source, project in tenant.project_repos:
-            for branch in source.getProjectBranches(project):
-                data = files.getFile(project.name, branch, '.zuul.yaml')
-                if data:
-                    source_context = model.SourceContext(project,
-                                                         branch, False)
-                    incdata = TenantParser._parseProjectRepoLayout(
+    def _loadDynamicProjectData(self, config, source, project, files,
+                                config_repo):
+        for branch in source.getProjectBranches(project):
+            data = None
+            if config_repo:
+                fn = 'zuul.yaml'
+                data = files.getFile(project.name, branch, fn)
+            if not data:
+                fn = '.zuul.yaml'
+                data = files.getFile(project.name, branch, fn)
+            if data:
+                source_context = model.SourceContext(project, branch,
+                                                     fn, config_repo)
+                if config_repo:
+                    incdata = TenantParser._parseConfigRepoLayout(
                         data, source_context)
                 else:
-                    incdata = project.unparsed_branch_config[branch]
-                if not incdata:
-                    continue
-                config.extend(incdata)
+                    incdata = TenantParser._parseProjectRepoLayout(
+                        data, source_context)
+            else:
+                incdata = project.unparsed_branch_config.get(branch)
+            if not incdata:
+                continue
+            config.extend(incdata)
+
+    def createDynamicLayout(self, tenant, files, include_config_repos=False):
+        if include_config_repos:
+            config = model.UnparsedTenantConfig()
+            for source, project in tenant.config_repos:
+                self._loadDynamicProjectData(config, source, project,
+                                             files, True)
+        else:
+            config = tenant.config_repos_config.copy()
+        for source, project in tenant.project_repos:
+            self._loadDynamicProjectData(config, source, project,
+                                         files, False)
+
         layout = model.Layout()
-        # TODOv3(jeblair): copying the pipelines could be dangerous/confusing.
+        # NOTE: the actual pipeline objects (complete with queues and
+        # enqueued items) are copied by reference here.  This allows
+        # our shadow dynamic configuration to continue to interact
+        # with all the other changes, each of which may have its own
+        # version of reality.  We do not support creating, updating,
+        # or deleting pipelines in dynamic layout changes.
         layout.pipelines = tenant.layout.pipelines
 
         for config_job in config.jobs:
@@ -869,5 +922,5 @@
 
         for config_project in config.projects.values():
             layout.addProjectConfig(ProjectParser.fromYaml(
-                tenant, layout, config_project), update_pipeline=False)
+                tenant, layout, config_project))
         return layout
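The NOTE about pipelines being "copied by reference" comes down to plain assignment of the `pipelines` dict: the shadow dynamic layout and the live layout point at the same pipeline objects, so enqueued items stay visible to both. A minimal illustration (stand-in class, not Zuul's `model.Layout`):

```python
class Layout(object):
    # Stand-in for model.Layout; only the attribute we care about here.
    def __init__(self):
        self.pipelines = {}


live = Layout()
live.pipelines['gate'] = ['item1']   # stand-in for a Pipeline with queues

shadow = Layout()
shadow.pipelines = live.pipelines    # shared by reference, not copied

live.pipelines['gate'].append('item2')   # visible through the shadow layout
```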
diff --git a/zuul/connection/sql.py b/zuul/connection/sql.py
new file mode 100644
index 0000000..479ee44
--- /dev/null
+++ b/zuul/connection/sql.py
@@ -0,0 +1,104 @@
+# Copyright 2014 Rackspace Australia
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+import alembic
+import alembic.config
+import sqlalchemy as sa
+import voluptuous as v
+
+from zuul.connection import BaseConnection
+
+BUILDSET_TABLE = 'zuul_buildset'
+BUILD_TABLE = 'zuul_build'
+
+
+class SQLConnection(BaseConnection):
+    driver_name = 'sql'
+    log = logging.getLogger("connection.sql")
+
+    def __init__(self, connection_name, connection_config):
+
+        super(SQLConnection, self).__init__(connection_name, connection_config)
+
+        self.dburi = None
+        self.engine = None
+        self.connection = None
+        self.tables_established = False
+        try:
+            self.dburi = self.connection_config.get('dburi')
+            self.engine = sa.create_engine(self.dburi)
+            self._migrate()
+            self._setup_tables()
+            self.tables_established = True
+        except sa.exc.NoSuchModuleError:
+            self.log.exception(
+                "The required module for the dburi dialect isn't available. "
+                "SQL connection %s will be unavailable." % connection_name)
+        except sa.exc.OperationalError:
+            self.log.exception(
+                "Unable to connect to the database or establish the required "
+                "tables. Reporter %s is disabled" % self)
+
+    def _migrate(self):
+        """Perform the alembic migrations for this connection"""
+        with self.engine.begin() as conn:
+            context = alembic.migration.MigrationContext.configure(conn)
+            current_rev = context.get_current_revision()
+            self.log.debug('Current migration revision: %s' % current_rev)
+
+            config = alembic.config.Config()
+            config.set_main_option("script_location",
+                                   "zuul:alembic/sql_reporter")
+            config.set_main_option("sqlalchemy.url",
+                                   self.connection_config.get('dburi'))
+
+            alembic.command.upgrade(config, 'head')
+
+    def _setup_tables(self):
+        metadata = sa.MetaData()
+
+        self.zuul_buildset_table = sa.Table(
+            BUILDSET_TABLE, metadata,
+            sa.Column('id', sa.Integer, primary_key=True),
+            sa.Column('zuul_ref', sa.String(255)),
+            sa.Column('pipeline', sa.String(255)),
+            sa.Column('project', sa.String(255)),
+            sa.Column('change', sa.Integer, nullable=True),
+            sa.Column('patchset', sa.Integer, nullable=True),
+            sa.Column('ref', sa.String(255)),
+            sa.Column('score', sa.Integer),
+            sa.Column('message', sa.TEXT()),
+        )
+
+        self.zuul_build_table = sa.Table(
+            BUILD_TABLE, metadata,
+            sa.Column('id', sa.Integer, primary_key=True),
+            sa.Column('buildset_id', sa.Integer,
+                      sa.ForeignKey(BUILDSET_TABLE + ".id")),
+            sa.Column('uuid', sa.String(36)),
+            sa.Column('job_name', sa.String(255)),
+            sa.Column('result', sa.String(255)),
+            sa.Column('start_time', sa.DateTime()),
+            sa.Column('end_time', sa.DateTime()),
+            sa.Column('voting', sa.Boolean),
+            sa.Column('log_url', sa.String(255)),
+            sa.Column('node_name', sa.String(255)),
+        )
+
+
+def getSchema():
+    sql_connection = v.Any(str, v.Schema({}, extra=True))
+    return sql_connection
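`zuul_buildset` and `zuul_build` above form a one-to-many pair linked by `buildset_id`: one buildset row per reported change, many build rows pointing back at it. A stdlib `sqlite3` sketch of the same shape (column list trimmed, and `change` adapted to `change_num` for this sketch; the real schema is managed by SQLAlchemy and alembic as shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE zuul_buildset (
    id INTEGER PRIMARY KEY,
    zuul_ref TEXT, pipeline TEXT, project TEXT,
    change_num INTEGER, patchset INTEGER,
    ref TEXT, score INTEGER, message TEXT
);
CREATE TABLE zuul_build (
    id INTEGER PRIMARY KEY,
    buildset_id INTEGER REFERENCES zuul_buildset(id),
    uuid TEXT, job_name TEXT, result TEXT,
    start_time TEXT, end_time TEXT,
    voting INTEGER, log_url TEXT, node_name TEXT
);
""")

# One buildset, one linked build.
cur = conn.execute(
    "INSERT INTO zuul_buildset (pipeline, project, score) VALUES (?, ?, ?)",
    ("gate", "org/project", 1))
buildset_id = cur.lastrowid
conn.execute(
    "INSERT INTO zuul_build (buildset_id, job_name, result) VALUES (?, ?, ?)",
    (buildset_id, "tox-py27", "SUCCESS"))

# Join back from builds to their buildset, as a reporting query would.
rows = conn.execute(
    "SELECT b.job_name, b.result FROM zuul_build b "
    "JOIN zuul_buildset s ON b.buildset_id = s.id "
    "WHERE s.project = ?", ("org/project",)).fetchall()
```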
diff --git a/zuul/driver/gerrit/gerritconnection.py b/zuul/driver/gerrit/gerritconnection.py
index d65e6a8..286006f 100644
--- a/zuul/driver/gerrit/gerritconnection.py
+++ b/zuul/driver/gerrit/gerritconnection.py
@@ -79,7 +79,7 @@
         if change:
             event.project_name = change.get('project')
             event.branch = change.get('branch')
-            event.change_number = change.get('number')
+            event.change_number = str(change.get('number'))
             event.change_url = change.get('url')
             patchset = data.get('patchSet')
             if patchset:
@@ -155,13 +155,14 @@
     poll_timeout = 500
 
     def __init__(self, gerrit_connection, username, hostname, port=29418,
-                 keyfile=None):
+                 keyfile=None, keepalive=60):
         threading.Thread.__init__(self)
         self.username = username
         self.keyfile = keyfile
         self.hostname = hostname
         self.port = port
         self.gerrit_connection = gerrit_connection
+        self.keepalive = keepalive
         self._stopped = False
 
     def _read(self, fd):
@@ -192,6 +193,8 @@
                            username=self.username,
                            port=self.port,
                            key_filename=self.keyfile)
+            transport = client.get_transport()
+            transport.set_keepalive(self.keepalive)
 
             stdin, stdout, stderr = client.exec_command("gerrit stream-events")
 
@@ -228,7 +231,7 @@
 
 class GerritConnection(BaseConnection):
     driver_name = 'gerrit'
-    log = logging.getLogger("connection.gerrit")
+    log = logging.getLogger("zuul.GerritConnection")
     depends_on_re = re.compile(r"^Depends-On: (I[0-9a-f]{40})\s*$",
                                re.MULTILINE | re.IGNORECASE)
     replication_timeout = 300
@@ -248,6 +251,7 @@
         self.server = self.connection_config.get('server')
         self.port = int(self.connection_config.get('port', 29418))
         self.keyfile = self.connection_config.get('sshkey', None)
+        self.keepalive = int(self.connection_config.get('keepalive', 60))
         self.watcher_thread = None
         self.event_queue = Queue.Queue()
         self.client = None
@@ -682,6 +686,8 @@
                        username=self.user,
                        port=self.port,
                        key_filename=self.keyfile)
+        transport = client.get_transport()
+        transport.set_keepalive(self.keepalive)
         self.client = client
 
     def _ssh(self, command, stdin_data=None):
@@ -786,7 +792,8 @@
             self.user,
             self.server,
             self.port,
-            keyfile=self.keyfile)
+            keyfile=self.keyfile,
+            keepalive=self.keepalive)
         self.watcher_thread.start()
 
     def _stop_event_connector(self):
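`transport.set_keepalive(self.keepalive)` asks paramiko to send SSH-layer keepalive packets every `keepalive` seconds, so a silently dropped `gerrit stream-events` connection is detected as dead rather than hanging forever. The TCP-level analogue using only the standard library (a rough sketch of the idea, not what paramiko actually does on the wire):

```python
import socket


def make_keepalive_socket(keepalive=60):
    # Ask the OS to probe an idle connection, the transport-level
    # counterpart of paramiko's set_keepalive() above.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        # Linux-only knob: seconds of idleness before probing starts.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, keepalive)
    return s


s = make_keepalive_socket()
enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
s.close()
```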
diff --git a/zuul/driver/gerrit/gerritreporter.py b/zuul/driver/gerrit/gerritreporter.py
index e2a5b94..d132d65 100644
--- a/zuul/driver/gerrit/gerritreporter.py
+++ b/zuul/driver/gerrit/gerritreporter.py
@@ -23,7 +23,7 @@
     """Sends off reports to Gerrit."""
 
     name = 'gerrit'
-    log = logging.getLogger("zuul.reporter.gerrit.Reporter")
+    log = logging.getLogger("zuul.GerritReporter")
 
     def report(self, source, pipeline, item):
         """Send a message to gerrit."""
diff --git a/zuul/driver/gerrit/gerrittrigger.py b/zuul/driver/gerrit/gerrittrigger.py
index 8a3fe42..c678bce 100644
--- a/zuul/driver/gerrit/gerrittrigger.py
+++ b/zuul/driver/gerrit/gerrittrigger.py
@@ -20,7 +20,7 @@
 
 class GerritTrigger(BaseTrigger):
     name = 'gerrit'
-    log = logging.getLogger("zuul.trigger.Gerrit")
+    log = logging.getLogger("zuul.GerritTrigger")
 
     def getEventFilters(self, trigger_conf):
         def toList(item):
diff --git a/zuul/driver/smtp/smtpconnection.py b/zuul/driver/smtp/smtpconnection.py
index 0172396..6338cd5 100644
--- a/zuul/driver/smtp/smtpconnection.py
+++ b/zuul/driver/smtp/smtpconnection.py
@@ -23,7 +23,7 @@
 
 class SMTPConnection(BaseConnection):
     driver_name = 'smtp'
-    log = logging.getLogger("connection.smtp")
+    log = logging.getLogger("zuul.SMTPConnection")
 
     def __init__(self, driver, connection_name, connection_config):
         super(SMTPConnection, self).__init__(driver, connection_name,
diff --git a/zuul/driver/smtp/smtpreporter.py b/zuul/driver/smtp/smtpreporter.py
index cf96e9f..dd618ef 100644
--- a/zuul/driver/smtp/smtpreporter.py
+++ b/zuul/driver/smtp/smtpreporter.py
@@ -22,7 +22,7 @@
     """Sends off reports to emails via SMTP."""
 
     name = 'smtp'
-    log = logging.getLogger("zuul.reporter.smtp.Reporter")
+    log = logging.getLogger("zuul.SMTPReporter")
 
     def report(self, source, pipeline, item):
         """Send the compiled report message via smtp."""
diff --git a/zuul/driver/timer/__init__.py b/zuul/driver/timer/__init__.py
index a188a26..3ce0b8d 100644
--- a/zuul/driver/timer/__init__.py
+++ b/zuul/driver/timer/__init__.py
@@ -26,8 +26,7 @@
 
 class TimerDriver(Driver, TriggerInterface):
     name = 'timer'
-
-    log = logging.getLogger("zuul.Timer")
+    log = logging.getLogger("zuul.TimerDriver")
 
     def __init__(self):
         self.apsched = BackgroundScheduler()
diff --git a/zuul/launcher/ansiblelaunchserver.py b/zuul/launcher/ansiblelaunchserver.py
index 5935c68..875cf2b 100644
--- a/zuul/launcher/ansiblelaunchserver.py
+++ b/zuul/launcher/ansiblelaunchserver.py
@@ -46,7 +46,7 @@
 ANSIBLE_WATCHDOG_GRACE = 5 * 60
 ANSIBLE_DEFAULT_TIMEOUT = 2 * 60 * 60
 ANSIBLE_DEFAULT_PRE_TIMEOUT = 10 * 60
-ANSIBLE_DEFAULT_POST_TIMEOUT = 10 * 60
+ANSIBLE_DEFAULT_POST_TIMEOUT = 30 * 60
 
 
 COMMANDS = ['reconfigure', 'stop', 'pause', 'unpause', 'release', 'graceful',
@@ -822,7 +822,7 @@
         result = None
         self._sent_complete_event = False
         self._aborted_job = False
-        self._watchog_timeout = False
+        self._watchdog_timeout = False
 
         try:
             self.sendStartEvent(job_name, args)
@@ -1351,7 +1351,10 @@
                         when='success|bool')
             blocks[0].insert(0, task)
             task = dict(zuul_log=dict(msg="Job complete, result: FAILURE"),
-                        when='not success|bool')
+                        when='not success|bool and not timedout|bool')
+            blocks[0].insert(0, task)
+            task = dict(zuul_log=dict(msg="Job timed out, result: FAILURE"),
+                        when='not success|bool and timedout|bool')
             blocks[0].insert(0, task)
 
             tasks.append(dict(block=blocks[0],
@@ -1509,6 +1512,7 @@
 
         cmd = ['ansible-playbook', jobdir.post_playbook,
                '-e', 'success=%s' % success,
+               '-e', 'timedout=%s' % self._watchdog_timeout,
                '-e@%s' % jobdir.vars,
                verbose]
         self.log.debug("Ansible post command: %s" % (cmd,))
diff --git a/zuul/launcher/client.py b/zuul/launcher/client.py
index 46644a6..52e4397 100644
--- a/zuul/launcher/client.py
+++ b/zuul/launcher/client.py
@@ -17,7 +17,6 @@
 import json
 import logging
 import os
-import six
 import time
 import threading
 from uuid import uuid4
@@ -149,10 +148,9 @@
     log = logging.getLogger("zuul.LaunchClient")
     negative_function_cache_ttl = 5
 
-    def __init__(self, config, sched, swift):
+    def __init__(self, config, sched):
         self.config = config
         self.sched = sched
-        self.swift = swift
         self.builds = {}
         self.meta_jobs = {}  # A list of meta-jobs like stop or describe
 
@@ -211,42 +209,6 @@
         self.log.debug("Function %s is not registered" % name)
         return False
 
-    def updateBuildParams(self, job, item, params):
-        """Allow the job to modify and add build parameters"""
-
-        # NOTE(jhesketh): The params need to stay in a key=value data pair
-        # as workers cannot necessarily handle lists.
-
-        if 'swift' in job.auth and self.swift.connection:
-
-            for name, s in job.swift.items():
-                swift_instructions = {}
-                s_config = {}
-                s_config.update((k, v.format(item=item, job=job,
-                                             change=item.change))
-                                if isinstance(v, six.string_types)
-                                else (k, v)
-                                for k, v in s.items())
-
-                (swift_instructions['URL'],
-                 swift_instructions['HMAC_BODY'],
-                 swift_instructions['SIGNATURE']) = \
-                    self.swift.generate_form_post_middleware_params(
-                        params['LOG_PATH'], **s_config)
-
-                if 'logserver_prefix' in s_config:
-                    swift_instructions['LOGSERVER_PREFIX'] = \
-                        s_config['logserver_prefix']
-                elif self.config.has_option('swift',
-                                            'default_logserver_prefix'):
-                    swift_instructions['LOGSERVER_PREFIX'] = \
-                        self.config.get('swift', 'default_logserver_prefix')
-
-                # Create a set of zuul instructions for each instruction-set
-                # given  in the form of NAME_PARAMETER=VALUE
-                for key, value in swift_instructions.items():
-                    params['_'.join(['SWIFT', name, key])] = value
-
     def launch(self, job, item, pipeline, dependent_items=[]):
         uuid = str(uuid4().hex)
         self.log.info(
@@ -311,9 +273,6 @@
         params['BASE_LOG_PATH'] = item.change.getBasePath()
         params['LOG_PATH'] = destination_path
 
-        # Allow the job to update the params
-        self.updateBuildParams(job, item, params)
-
         # This is what we should be heading toward for parameters:
 
         # required:
diff --git a/zuul/layoutvalidator.py b/zuul/layoutvalidator.py
deleted file mode 100644
index 32b6a9e..0000000
--- a/zuul/layoutvalidator.py
+++ /dev/null
@@ -1,371 +0,0 @@
-# Copyright 2013 OpenStack Foundation
-# Copyright 2013 Antoine "hashar" Musso
-# Copyright 2013 Wikimedia Foundation Inc.
-# Copyright 2014 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import voluptuous as v
-import string
-
-
-# Several forms accept either a single item or a list, this makes
-# specifying that in the schema easy (and explicit).
-def toList(x):
-    return v.Any([x], x)
-
-
-class ConfigSchema(object):
-    tenant_source = v.Schema({'repos': [str]})
-
-    def validateTenantSources(self, value, path=[]):
-        if isinstance(value, dict):
-            for k, val in value.items():
-                self.validateTenantSource(val, path + [k])
-        else:
-            raise v.Invalid("Invalid tenant source", path)
-
-    def validateTenantSource(self, value, path=[]):
-        # TODOv3(jeblair): validate against connections
-        self.tenant_source(value)
-
-    def getSchema(self, data, connections=None):
-        tenant = {v.Required('name'): str,
-                  'include': toList(str),
-                  'source': self.validateTenantSources}
-
-        schema = v.Schema({'tenants': [tenant]})
-
-        return schema
-
-
-class LayoutSchema(object):
-    manager = v.Any('IndependentPipelineManager',
-                    'DependentPipelineManager')
-
-    precedence = v.Any('normal', 'low', 'high')
-
-    approval = v.Schema({'username': str,
-                         'email-filter': str,
-                         'email': str,
-                         'older-than': str,
-                         'newer-than': str,
-                         }, extra=True)
-
-    require = {'approval': toList(approval),
-               'open': bool,
-               'current-patchset': bool,
-               'status': toList(str)}
-
-    reject = {'approval': toList(approval)}
-
-    window = v.All(int, v.Range(min=0))
-    window_floor = v.All(int, v.Range(min=1))
-    window_type = v.Any('linear', 'exponential')
-    window_factor = v.All(int, v.Range(min=1))
-
-    pipeline = {v.Required('name'): str,
-                v.Required('manager'): manager,
-                'source': str,
-                'precedence': precedence,
-                'description': str,
-                'require': require,
-                'reject': reject,
-                'success-message': str,
-                'failure-message': str,
-                'merge-failure-message': str,
-                'footer-message': str,
-                'dequeue-on-new-patchset': bool,
-                'ignore-dependencies': bool,
-                'disable-after-consecutive-failures':
-                    v.All(int, v.Range(min=1)),
-                'window': window,
-                'window-floor': window_floor,
-                'window-increase-type': window_type,
-                'window-increase-factor': window_factor,
-                'window-decrease-type': window_type,
-                'window-decrease-factor': window_factor,
-                }
-
-    project_template = {v.Required('name'): str}
-    project_templates = [project_template]
-
-    swift = {v.Required('name'): str,
-             'container': str,
-             'expiry': int,
-             'max_file_size': int,
-             'max-file-size': int,
-             'max_file_count': int,
-             'max-file-count': int,
-             'logserver_prefix': str,
-             'logserver-prefix': str,
-             }
-
-    skip_if = {'project': str,
-               'branch': str,
-               'all-files-match-any': toList(str),
-               }
-
-    job = {v.Required('name'): str,
-           'queue-name': str,
-           'failure-message': str,
-           'success-message': str,
-           'failure-pattern': str,
-           'success-pattern': str,
-           'hold-following-changes': bool,
-           'voting': bool,
-           'attempts': int,
-           'mutex': str,
-           'tags': toList(str),
-           'branch': toList(str),
-           'files': toList(str),
-           'swift': toList(swift),
-           'skip-if': toList(skip_if),
-           }
-    jobs = [job]
-
-    job_name = v.Schema(v.Match("^\S+$"))
-
-    def validateJob(self, value, path=[]):
-        if isinstance(value, list):
-            for (i, val) in enumerate(value):
-                self.validateJob(val, path + [i])
-        elif isinstance(value, dict):
-            for k, val in value.items():
-                self.validateJob(val, path + [k])
-        else:
-            self.job_name.schema(value)
-
-    def validateTemplateCalls(self, calls):
-        """ Verify a project pass the parameters required
-            by a project-template
-        """
-        for call in calls:
-            schema = self.templates_schemas[call.get('name')]
-            schema(call)
-
-    def collectFormatParam(self, tree):
-        """In a nested tree of string, dict and list, find out any named
-           parameters that might be used by str.format().  This is used to find
-           out whether projects are passing all the required parameters when
-           using a project template.
-
-            Returns a set() of all the named parameters found.
-        """
-        parameters = set()
-        if isinstance(tree, str):
-            # parse() returns a tuple of
-            # (literal_text, field_name, format_spec, conversion)
-            # We are just looking for field_name
-            parameters = set([t[1] for t in string.Formatter().parse(tree)
-                              if t[1] is not None])
-        elif isinstance(tree, list):
-            for item in tree:
-                parameters.update(self.collectFormatParam(item))
-        elif isinstance(tree, dict):
-            for item in tree:
-                parameters.update(self.collectFormatParam(tree[item]))
-
-        return parameters
-
-    def getDriverSchema(self, dtype, connections):
-        # TODO(jhesketh): Make the driver discovery dynamic
-        connection_drivers = {
-            'trigger': {
-                'gerrit': 'zuul.trigger.gerrit',
-            },
-            'reporter': {
-                'gerrit': 'zuul.reporter.gerrit',
-                'smtp': 'zuul.reporter.smtp',
-            },
-        }
-        standard_drivers = {
-            'trigger': {
-                'timer': 'zuul.trigger.timer',
-                'zuul': 'zuul.trigger.zuultrigger',
-            }
-        }
-
-        schema = {}
-        # Add the configured connections as available layout options
-        for connection_name, connection in connections.items():
-            for dname, dmod in connection_drivers.get(dtype, {}).items():
-                if connection.driver_name == dname:
-                    schema[connection_name] = toList(__import__(
-                        connection_drivers[dtype][dname],
-                        fromlist=['']).getSchema())
-
-        # Standard drivers are always available and don't require a unique
-        # (connection) name
-        for dname, dmod in standard_drivers.get(dtype, {}).items():
-            schema[dname] = toList(__import__(
-                standard_drivers[dtype][dname], fromlist=['']).getSchema())
-
-        return schema
-
-    def getSchema(self, data, connections=None):
-        if not isinstance(data, dict):
-            raise Exception("Malformed layout configuration: top-level type "
-                            "should be a dictionary")
-        pipelines = data.get('pipelines')
-        if not pipelines:
-            pipelines = []
-        pipelines = [p['name'] for p in pipelines if 'name' in p]
-
-        # Whenever a project uses a template, it better have to exist
-        project_templates = data.get('project-templates', [])
-        template_names = [t['name'] for t in project_templates
-                          if 'name' in t]
-
-        # A project using a template must pass all parameters to it.
-        # We first collect each templates parameters and craft a new
-        # schema for each of the template. That will later be used
-        # by validateTemplateCalls().
-        self.templates_schemas = {}
-        for t_name in template_names:
-            # Find out the parameters used inside each templates:
-            template = [t for t in project_templates
-                        if t['name'] == t_name]
-            template_parameters = self.collectFormatParam(template)
-
-            # Craft the templates schemas
-            schema = {v.Required('name'): v.Any(*template_names)}
-            for required_param in template_parameters:
-                # special case 'name' which will be automatically provided
-                if required_param == 'name':
-                    continue
-                # add this template parameters as requirements:
-                schema.update({v.Required(required_param): str})
-
-            # Register the schema for validateTemplateCalls()
-            self.templates_schemas[t_name] = v.Schema(schema)
-
-        project = {'name': str,
-                   'merge-mode': v.Any('merge', 'merge-resolve,',
-                                       'cherry-pick'),
-                   'template': self.validateTemplateCalls,
-                   }
-
-        # And project should refers to existing pipelines
-        for p in pipelines:
-            project[p] = self.validateJob
-        projects = [project]
-
-        # Sub schema to validate a project template has existing
-        # pipelines and jobs.
-        project_template = {'name': str}
-        for p in pipelines:
-            project_template[p] = self.validateJob
-        project_templates = [project_template]
-
-        # TODO(jhesketh): source schema is still defined above as sources
-        # currently aren't key/value so there is nothing to validate. Need to
-        # revisit this and figure out how to allow drivers with and without
-        # params. eg support all:
-        #   source: gerrit
-        # and
-        #   source:
-        #     gerrit:
-        #       - val
-        #       - val2
-        # and
-        #   source:
-        #     gerrit: something
-        # etc...
-        self.pipeline['trigger'] = v.Required(
-            self.getDriverSchema('trigger', connections))
-        for action in ['start', 'success', 'failure', 'merge-failure',
-                       'disabled']:
-            self.pipeline[action] = self.getDriverSchema('reporter',
-                                                         connections)
-
-        # Gather our sub schemas
-        schema = v.Schema({'includes': self.includes,
-                           v.Required('pipelines'): [self.pipeline],
-                           'jobs': self.jobs,
-                           'project-templates': project_templates,
-                           v.Required('projects'): projects,
-                           })
-        return schema
-
-
-class LayoutValidator(object):
-    def checkDuplicateNames(self, data, path):
-        items = []
-        for i, item in enumerate(data):
-            if item['name'] in items:
-                raise v.Invalid("Duplicate name: %s" % item['name'],
-                                path + [i])
-            items.append(item['name'])
-
-    def extraDriverValidation(self, dtype, driver_data, connections=None):
-        # Some drivers may have extra validation to run on the layout
-        # TODO(jhesketh): Make the driver discovery dynamic
-        connection_drivers = {
-            'trigger': {
-                'gerrit': 'zuul.trigger.gerrit',
-            },
-            'reporter': {
-                'gerrit': 'zuul.reporter.gerrit',
-                'smtp': 'zuul.reporter.smtp',
-            },
-        }
-        standard_drivers = {
-            'trigger': {
-                'timer': 'zuul.trigger.timer',
-                'zuul': 'zuul.trigger.zuultrigger',
-            }
-        }
-
-        for dname, d_conf in driver_data.items():
-            for connection_name, connection in connections.items():
-                if connection_name == dname:
-                    if (connection.driver_name in
-                        connection_drivers.get(dtype, {}).keys()):
-                        module = __import__(
-                            connection_drivers[dtype][connection.driver_name],
-                            fromlist=['']
-                        )
-                        if 'validate_conf' in dir(module):
-                            module.validate_conf(d_conf)
-                    break
-            if dname in standard_drivers.get(dtype, {}).keys():
-                module = __import__(standard_drivers[dtype][dname],
-                                    fromlist=[''])
-                if 'validate_conf' in dir(module):
-                    module.validate_conf(d_conf)
-
-    def validate(self, data, connections=None):
-        schema = LayoutSchema().getSchema(data, connections)
-        schema(data)
-        self.checkDuplicateNames(data['pipelines'], ['pipelines'])
-        if 'jobs' in data:
-            self.checkDuplicateNames(data['jobs'], ['jobs'])
-        self.checkDuplicateNames(data['projects'], ['projects'])
-        if 'project-templates' in data:
-            self.checkDuplicateNames(
-                data['project-templates'], ['project-templates'])
-
-        for pipeline in data['pipelines']:
-            self.extraDriverValidation('trigger', pipeline['trigger'],
-                                       connections)
-            for action in ['start', 'success', 'failure', 'merge-failure']:
-                if action in pipeline:
-                    self.extraDriverValidation('reporter', pipeline[action],
-                                               connections)
-
-
-class ConfigValidator(object):
-    def validate(self, data, connections=None):
-        schema = ConfigSchema().getSchema(data, connections)
-        schema(data)
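The deleted LayoutValidator combined voluptuous schema validation with a small duplicate-name pass over pipelines, jobs, projects and templates. A standalone sketch of that duplicate check (hypothetical helper name, plain ValueError in place of voluptuous's Invalid):

```python
def check_duplicate_names(items, path):
    """Raise on the first repeated 'name' in a list of config entries,
    mirroring the checkDuplicateNames pass the removed LayoutValidator
    ran after schema validation."""
    seen = set()
    for i, item in enumerate(items):
        name = item.get('name')
        if name in seen:
            raise ValueError("Duplicate name: %s at %s[%d]" % (name, path, i))
        seen.add(name)

# Distinct names pass silently; a repeat raises.
check_duplicate_names([{'name': 'check'}, {'name': 'gate'}], 'pipelines')
```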
diff --git a/zuul/lib/cloner.py b/zuul/lib/cloner.py
index 197c426..18dea91 100644
--- a/zuul/lib/cloner.py
+++ b/zuul/lib/cloner.py
@@ -46,6 +46,8 @@
         self.zuul_branch = zuul_branch or ''
         self.zuul_ref = zuul_ref or ''
         self.zuul_url = zuul_url
+        self.zuul_project = zuul_project
+
         self.project_branches = project_branches or {}
         self.project_revisions = {}
 
@@ -61,7 +63,7 @@
             raise Exception("Unable to read clone map file at %s." %
                             clone_map_file)
         clone_map_file = open(clone_map_file)
-        self.clone_map = yaml.load(clone_map_file).get('clonemap')
+        self.clone_map = yaml.safe_load(clone_map_file).get('clonemap')
         self.log.info("Loaded map containing %s rules", len(self.clone_map))
         return self.clone_map
 
@@ -77,7 +79,18 @@
     def cloneUpstream(self, project, dest):
         # Check for a cached git repo first
         git_cache = '%s/%s' % (self.cache_dir, project)
-        git_upstream = '%s/%s' % (self.git_url, project)
+
+        # Then, if we are cloning the repo for the zuul_project, set
+        # its origin to the zuul merger, as it is guaranteed to be
+        # correct and up to date even if mirrors haven't updated yet.
+        # Otherwise, we cannot be sure about the state of the project,
+        # so our best chance of getting the most current state is to
+        # set origin to the git_url.
+        if (self.zuul_url and project == self.zuul_project):
+            git_upstream = '%s/%s' % (self.zuul_url, project)
+        else:
+            git_upstream = '%s/%s' % (self.git_url, project)
+
         repo_is_cloned = os.path.exists(os.path.join(dest, '.git'))
         if (self.cache_dir and
             os.path.exists(git_cache) and
@@ -104,23 +117,35 @@
 
         return repo
 
-    def fetchFromZuul(self, repo, project, ref):
-        zuul_remote = '%s/%s' % (self.zuul_url, project)
+    def fetchRef(self, repo, project, ref):
+        # If we are fetching a zuul ref, the only place to get it is
+        # from the zuul merger (and it is guaranteed to be correct).
+        # Otherwise, the ref is a branch or tag, and the only case in
+        # which we can be certain it is correct is when the project
+        # matches zuul_project.  If neither of those conditions is
+        # met, we are most likely to get the correct state from the
+        # git_url.
+        if (ref.startswith('refs/zuul') or
+            project == self.zuul_project):
+
+            remote = '%s/%s' % (self.zuul_url, project)
+        else:
+            remote = '%s/%s' % (self.git_url, project)
 
         try:
-            repo.fetchFrom(zuul_remote, ref)
-            self.log.debug("Fetched ref %s from %s", ref, project)
+            repo.fetchFrom(remote, ref)
+            self.log.debug("Fetched ref %s from %s", ref, remote)
             return True
         except ValueError:
-            self.log.debug("Project %s in Zuul does not have ref %s",
-                           project, ref)
+            self.log.debug("Repo %s does not have ref %s",
+                           remote, ref)
             return False
         except GitCommandError as error:
             # Bail out if fetch fails due to infrastructure reasons
             if error.stderr.startswith('fatal: unable to access'):
                 raise
-            self.log.debug("Project %s in Zuul does not have ref %s",
-                           project, ref)
+            self.log.debug("Repo %s does not have ref %s",
+                           remote, ref)
             return False
 
     def prepareRepo(self, project, dest):
@@ -192,7 +217,7 @@
             self.log.info("Attempting to check out revision %s for "
                           "project %s", indicated_revision, project)
             try:
-                self.fetchFromZuul(repo, project, self.zuul_ref)
+                self.fetchRef(repo, project, self.zuul_ref)
                 commit = repo.checkout(indicated_revision)
             except (ValueError, GitCommandError):
                 raise exceptions.RevNotFound(project, indicated_revision)
@@ -201,10 +226,10 @@
         # If we have a non empty zuul_ref to use, use it. Otherwise we fall
         # back to checking out the branch.
         elif ((override_zuul_ref and
-              self.fetchFromZuul(repo, project, override_zuul_ref)) or
+              self.fetchRef(repo, project, override_zuul_ref)) or
               (fallback_zuul_ref and
                fallback_zuul_ref != override_zuul_ref and
-              self.fetchFromZuul(repo, project, fallback_zuul_ref))):
+              self.fetchRef(repo, project, fallback_zuul_ref))):
             # Work around a bug in GitPython which can not parse FETCH_HEAD
             gitcmd = git.Git(dest)
             fetch_head = gitcmd.rev_parse('FETCH_HEAD')
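The cloner changes above funnel every fetch through one remote-selection rule. A minimal sketch of that rule as a pure function (hypothetical helper; in the patch the logic lives inline in fetchRef and cloneUpstream):

```python
def pick_remote(ref, project, zuul_project, zuul_url, git_url):
    """Choose where to fetch a ref from, following the rule in fetchRef:
    zuul refs only exist on the zuul merger, and refs for the project
    under test are only guaranteed correct there; everything else is
    fetched from the mirror at git_url."""
    if ref.startswith('refs/zuul') or project == zuul_project:
        return '%s/%s' % (zuul_url, project)
    return '%s/%s' % (git_url, project)
```

For example, a branch ref for an unrelated project resolves to the git_url mirror, while the same branch ref for zuul_project resolves to the merger.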
diff --git a/zuul/lib/connections.py b/zuul/lib/connections.py
index c8b61a9..27d8a1b 100644
--- a/zuul/lib/connections.py
+++ b/zuul/lib/connections.py
@@ -12,6 +12,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import logging
 import re
 
 import zuul.driver.zuul
@@ -29,6 +30,8 @@
 class ConnectionRegistry(object):
     """A registry of connections"""
 
+    log = logging.getLogger("zuul.ConnectionRegistry")
+
     def __init__(self):
         self.connections = {}
         self.drivers = {}
@@ -92,16 +95,26 @@
         # connection named 'gerrit' or 'smtp' respectively
 
         if 'gerrit' in config.sections():
-            driver = self.drivers['gerrit']
-            connections['gerrit'] = \
-                driver.getConnection(
-                    'gerrit', dict(config.items('gerrit')))
+            if 'gerrit' in connections:
+                self.log.warning(
+                    "The legacy [gerrit] section will be ignored in favour"
+                    " of the [connection gerrit].")
+            else:
+                driver = self.drivers['gerrit']
+                connections['gerrit'] = \
+                    driver.getConnection(
+                        'gerrit', dict(config.items('gerrit')))
 
         if 'smtp' in config.sections():
-            driver = self.drivers['smtp']
-            connections['smtp'] = \
-                driver.getConnection(
-                    'smtp', dict(config.items('smtp')))
+            if 'smtp' in connections:
+                self.log.warning(
+                    "The legacy [smtp] section will be ignored in favour"
+                    " of the [connection smtp].")
+            else:
+                driver = self.drivers['smtp']
+                connections['smtp'] = \
+                    driver.getConnection(
+                        'smtp', dict(config.items('smtp')))
 
         # Create default connections for drivers which need no
         # connection information (e.g., 'timer' or 'zuul').
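The precedence rule the connections.py hunk introduces can be illustrated in isolation. A sketch using the Python 3 configparser (hypothetical helper; the real code builds driver connection objects rather than a name set):

```python
import configparser

def effective_connections(config):
    """Return (connection_names, ignored_legacy_sections): a legacy
    [gerrit] or [smtp] section is ignored whenever a corresponding
    [connection gerrit] / [connection smtp] section is present."""
    names = set()
    for section in config.sections():
        if section.startswith('connection '):
            names.add(section.split(None, 1)[1])
    ignored = []
    for legacy in ('gerrit', 'smtp'):
        if config.has_section(legacy):
            if legacy in names:
                ignored.append(legacy)  # named connection wins
            else:
                names.add(legacy)       # legacy section still honoured
    return names, ignored
```

A config with both [connection gerrit] and [gerrit] thus yields one gerrit connection and a warning-worthy ignored legacy section.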
diff --git a/zuul/lib/swift.py b/zuul/lib/swift.py
deleted file mode 100644
index b5d3bc7..0000000
--- a/zuul/lib/swift.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright 2014 Rackspace Australia
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import hmac
-from hashlib import sha1
-import logging
-from time import time
-import os
-import random
-import six
-from six.moves import urllib
-import string
-
-
-class Swift(object):
-    log = logging.getLogger("zuul.lib.swift")
-
-    def __init__(self, config):
-        self.config = config
-        self.connection = False
-        if self.config.has_option('swift', 'X-Account-Meta-Temp-Url-Key'):
-            self.secure_key = self.config.get('swift',
-                                              'X-Account-Meta-Temp-Url-Key')
-        else:
-            self.secure_key = ''.join(
-                random.choice(string.ascii_uppercase + string.digits)
-                for x in range(20)
-            )
-
-        self.storage_url = ''
-        if self.config.has_option('swift', 'X-Storage-Url'):
-            self.storage_url = self.config.get('swift', 'X-Storage-Url')
-
-        try:
-            if self.config.has_section('swift'):
-                if (not self.config.has_option('swift', 'Send-Temp-Url-Key')
-                    or self.config.getboolean('swift',
-                                              'Send-Temp-Url-Key')):
-                    self.connect()
-
-                    # Tell swift of our key
-                    headers = {}
-                    headers['X-Account-Meta-Temp-Url-Key'] = self.secure_key
-                    self.connection.post_account(headers)
-
-                if not self.config.has_option('swift', 'X-Storage-Url'):
-                    self.connect()
-                    self.storage_url = self.connection.get_auth()[0]
-        except Exception as e:
-            self.log.warning("Unable to set up swift. Signed storage URL is "
-                             "likely to be wrong. %s" % e)
-
-    def connect(self):
-        if not self.connection:
-            authurl = self.config.get('swift', 'authurl')
-
-            user = (self.config.get('swift', 'user')
-                    if self.config.has_option('swift', 'user') else None)
-            key = (self.config.get('swift', 'key')
-                   if self.config.has_option('swift', 'key') else None)
-            retries = (self.config.get('swift', 'retries')
-                       if self.config.has_option('swift', 'retries') else 5)
-            preauthurl = (self.config.get('swift', 'preauthurl')
-                          if self.config.has_option('swift', 'preauthurl')
-                          else None)
-            preauthtoken = (self.config.get('swift', 'preauthtoken')
-                            if self.config.has_option('swift', 'preauthtoken')
-                            else None)
-            snet = (self.config.get('swift', 'snet')
-                    if self.config.has_option('swift', 'snet') else False)
-            starting_backoff = (self.config.get('swift', 'starting_backoff')
-                                if self.config.has_option('swift',
-                                                          'starting_backoff')
-                                else 1)
-            max_backoff = (self.config.get('swift', 'max_backoff')
-                           if self.config.has_option('swift', 'max_backoff')
-                           else 64)
-            tenant_name = (self.config.get('swift', 'tenant_name')
-                           if self.config.has_option('swift', 'tenant_name')
-                           else None)
-            auth_version = (self.config.get('swift', 'auth_version')
-                            if self.config.has_option('swift', 'auth_version')
-                            else 2.0)
-            cacert = (self.config.get('swift', 'cacert')
-                      if self.config.has_option('swift', 'cacert') else None)
-            insecure = (self.config.get('swift', 'insecure')
-                        if self.config.has_option('swift', 'insecure')
-                        else False)
-            ssl_compression = (self.config.get('swift', 'ssl_compression')
-                               if self.config.has_option('swift',
-                                                         'ssl_compression')
-                               else True)
-
-            available_os_options = ['tenant_id', 'auth_token', 'service_type',
-                                    'endpoint_type', 'tenant_name',
-                                    'object_storage_url', 'region_name']
-
-            os_options = {}
-            for os_option in available_os_options:
-                if self.config.has_option('swift', os_option):
-                    os_options[os_option] = self.config.get('swift', os_option)
-
-            import swiftclient
-            self.connection = swiftclient.client.Connection(
-                authurl=authurl, user=user, key=key, retries=retries,
-                preauthurl=preauthurl, preauthtoken=preauthtoken, snet=snet,
-                starting_backoff=starting_backoff, max_backoff=max_backoff,
-                tenant_name=tenant_name, os_options=os_options,
-                auth_version=auth_version, cacert=cacert, insecure=insecure,
-                ssl_compression=ssl_compression)
-
-    def generate_form_post_middleware_params(self, destination_prefix='',
-                                             **kwargs):
-        """Generate the FormPost middleware params for the given settings"""
-
-        # Define the available settings and their defaults
-        settings = {
-            'container': '',
-            'expiry': 7200,
-            'max_file_size': 104857600,
-            'max_file_count': 10,
-            'file_path_prefix': ''
-        }
-
-        for key, default in six.iteritems(settings):
-            # TODO(jeblair): Remove the following two lines after a
-            # deprecation period for the underscore variants of the
-            # settings in YAML.
-            if key in kwargs:
-                settings[key] = kwargs[key]
-            # Since we prefer '-' rather than '_' in YAML, look up
-            # keys there using hyphens.  Continue to use underscores
-            # everywhere else.
-            altkey = key.replace('_', '-')
-            if altkey in kwargs:
-                settings[key] = kwargs[altkey]
-            elif self.config.has_option('swift', 'default_' + key):
-                settings[key] = self.config.get('swift', 'default_' + key)
-            # TODO: these are always strings; some should be converted
-            # to ints.
-
-        expires = int(time() + int(settings['expiry']))
-        redirect = ''
-
-        url = os.path.join(self.storage_url, settings['container'],
-                           settings['file_path_prefix'],
-                           destination_prefix)
-        u = urllib.parse.urlparse(url)
-
-        hmac_body = '%s\n%s\n%s\n%s\n%s' % (u.path, redirect,
-                                            settings['max_file_size'],
-                                            settings['max_file_count'],
-                                            expires)
-
-        signature = hmac.new(self.secure_key, hmac_body, sha1).hexdigest()
-
-        return url, hmac_body, signature
diff --git a/zuul/manager/__init__.py b/zuul/manager/__init__.py
index 4447615..85b9723 100644
--- a/zuul/manager/__init__.py
+++ b/zuul/manager/__init__.py
@@ -91,11 +91,13 @@
                 log_jobs(x, indent + 2)
 
         for project_name in layout.project_configs.keys():
-            project = self.pipeline.source.getProject(project_name)
-            tree = self.pipeline.getJobTree(project)
-            if tree:
-                self.log.info("    %s" % project)
-                log_jobs(tree)
+            project_config = layout.project_configs.get(project_name)
+            if project_config:
+                project_pipeline_config = project_config.pipelines.get(
+                    self.pipeline.name)
+                if project_pipeline_config:
+                    self.log.info("    %s" % project_name)
+                    log_jobs(project_pipeline_config.job_tree)
         self.log.info("  On start:")
         self.log.info("    %s" % self.pipeline.start_actions)
         self.log.info("  On success:")
@@ -466,6 +468,43 @@
                     newrev=newrev,
                     )
 
+    def _loadDynamicLayout(self, item):
+        # Load layout
+        # Late import to break an import loop
+        import zuul.configloader
+        loader = zuul.configloader.ConfigLoader()
+
+        build_set = item.current_build_set
+        self.log.debug("Load dynamic layout with %s" % build_set.files)
+        try:
+            # First parse the config as it will land with the
+            # full set of config and project repos.  This lets us
+            # catch syntax errors in config repos even though we won't
+            # actually run with that config.
+            loader.createDynamicLayout(
+                item.pipeline.layout.tenant,
+                build_set.files,
+                include_config_repos=True)
+
+            # Then create the config a second time but without changes
+            # to config repos so that we actually use this config.
+            layout = loader.createDynamicLayout(
+                item.pipeline.layout.tenant,
+                build_set.files,
+                include_config_repos=False)
+        except zuul.configloader.ConfigurationSyntaxError as e:
+            self.log.info("Configuration syntax error "
+                          "in dynamic layout %s" %
+                          build_set.files)
+            item.setConfigError(str(e))
+            return None
+        except Exception:
+            self.log.exception("Error in dynamic layout %s" %
+                               build_set.files)
+            item.setConfigError("Unknown configuration error")
+            return None
+        return layout
+
     def getLayout(self, item):
         if not item.change.updatesConfig():
             if item.item_ahead:
@@ -479,27 +518,7 @@
         if build_set.merge_state == build_set.COMPLETE:
             if build_set.unable_to_merge:
                 return None
-            # Load layout
-            # Late import to break an import loop
-            import zuul.configloader
-            loader = zuul.configloader.ConfigLoader()
-            self.log.debug("Load dynamic layout with %s" % build_set.files)
-            try:
-                layout = loader.createDynamicLayout(
-                    item.pipeline.layout.tenant,
-                    build_set.files)
-            except zuul.configloader.ConfigurationSyntaxError as e:
-                self.log.info("Configuration syntax error "
-                              "in dynamic layout %s" %
-                              build_set.files)
-                item.setConfigError(str(e))
-                return None
-            except Exception:
-                self.log.exception("Error in dynamic layout %s" %
-                                   build_set.files)
-                item.setConfigError("Unknown configuration error")
-                return None
-            return layout
+            return self._loadDynamicLayout(item)
         build_set.merge_state = build_set.PENDING
         self.log.debug("Preparing dynamic layout for: %s" % item.change)
         dependent_items = self.getDependentItems(item)
@@ -508,7 +527,7 @@
         merger_items = map(self._makeMergerItem, all_items)
         self.sched.merger.mergeChanges(merger_items,
                                        item.current_build_set,
-                                       ['.zuul.yaml'],
+                                       ['zuul.yaml', '.zuul.yaml'],
                                        self.pipeline.precedence)
 
     def prepareLayout(self, item):
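The refactored _loadDynamicLayout above parses the proposed config twice. A standalone sketch of that two-pass pattern (hypothetical function; `create_layout` stands in for loader.createDynamicLayout, and SyntaxError stands in for ConfigurationSyntaxError):

```python
def load_dynamic_layout(create_layout, files, log=print):
    """First parse with config-repo changes included purely to surface
    syntax errors in them, then parse again without those changes to
    produce the layout that is actually used."""
    try:
        # Validation-only pass: include proposed config-repo changes.
        create_layout(files, include_config_repos=True)
        # Real pass: config-repo changes are not applied dynamically.
        return create_layout(files, include_config_repos=False)
    except SyntaxError as e:
        log("Configuration syntax error in dynamic layout: %s" % e)
        return None
```

Either pass failing reports the error and yields no layout, matching the patch's item.setConfigError behaviour.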
diff --git a/zuul/manager/dependent.py b/zuul/manager/dependent.py
index 3d006c2..f5fa579 100644
--- a/zuul/manager/dependent.py
+++ b/zuul/manager/dependent.py
@@ -39,10 +39,12 @@
         change_queues = {}
         project_configs = self.pipeline.layout.project_configs
 
-        for project in self.pipeline.getProjects():
-            project_config = project_configs[project.name]
-            project_pipeline_config = project_config.pipelines[
-                self.pipeline.name]
+        for project_config in project_configs.values():
+            project_pipeline_config = project_config.pipelines.get(
+                self.pipeline.name)
+            if project_pipeline_config is None:
+                continue
+            project = self.pipeline.source.getProject(project_config.name)
             queue_name = project_pipeline_config.queue_name
             if queue_name and queue_name in change_queues:
                 change_queue = change_queues[queue_name]
diff --git a/zuul/merger/merger.py b/zuul/merger/merger.py
index 658fd64..d07a95b 100644
--- a/zuul/merger/merger.py
+++ b/zuul/merger/merger.py
@@ -226,6 +226,14 @@
         else:
             return None
 
+    def _setGitSsh(self, connection_name):
+        wrapper_name = '.ssh_wrapper_%s' % connection_name
+        name = os.path.join(self.working_root, wrapper_name)
+        if os.path.isfile(name):
+            os.environ['GIT_SSH'] = name
+        elif 'GIT_SSH' in os.environ:
+            del os.environ['GIT_SSH']
+
     def addProject(self, project, url):
         repo = None
         try:
@@ -246,6 +254,10 @@
         return self.addProject(project, url)
 
     def updateRepo(self, project, url):
+        # TODOv3(jhesketh): Reimplement
+        # da90a50b794f18f74de0e2c7ec3210abf79dda24 after merge.
+        # Likely we'll handle connection context per project differently.
+        # self._setGitSsh()
         repo = self.getRepo(project, url)
         try:
             self.log.info("Updating local repository %s", project)
diff --git a/zuul/merger/server.py b/zuul/merger/server.py
index ecce2cf..c2738a2 100644
--- a/zuul/merger/server.py
+++ b/zuul/merger/server.py
@@ -100,6 +100,8 @@
                 except Exception:
                     self.log.exception("Exception while running job")
                     job.sendWorkException(traceback.format_exc())
+            except gear.InterruptedError:
+                return
             except Exception:
                 self.log.exception("Exception while getting job")
 
@@ -116,7 +118,8 @@
 
     def update(self, job):
         args = json.loads(job.arguments)
-        self.merger.updateRepo(args['project'], args['url'])
+        self.merger.updateRepo(args['project'],
+                               args['url'])
         result = dict(updated=True,
                       zuul_url=self.zuul_url)
         job.sendWorkComplete(json.dumps(result))
diff --git a/zuul/model.py b/zuul/model.py
index 19931ea..9118fd4 100644
--- a/zuul/model.py
+++ b/zuul/model.py
@@ -123,7 +123,6 @@
         self.start_message = None
         self.dequeue_on_new_patchset = True
         self.ignore_dependencies = False
-        self.job_trees = {}  # project -> JobTree
         self.manager = None
         self.queues = []
         self.precedence = PRECEDENCE_NORMAL
@@ -160,13 +159,6 @@
     def setManager(self, manager):
         self.manager = manager
 
-    def getProjects(self):
-        # cmp is not in python3, applied idiom from
-        # http://python-future.org/compatible_idioms.html#cmp
-        return sorted(
-            self.job_trees.keys(),
-            key=lambda p: p.name)
-
     def addQueue(self, queue):
         self.queues.append(queue)
 
@@ -179,10 +171,6 @@
     def removeQueue(self, queue):
         self.queues.remove(queue)
 
-    def getJobTree(self, project):
-        tree = self.job_trees.get(project)
-        return tree
-
     def getChangesInQueue(self):
         changes = []
         for shared_queue in self.queues:
@@ -317,12 +305,6 @@
             item.item_ahead.items_behind.append(item)
         return True
 
-    def mergeChangeQueue(self, other):
-        for project in other.projects:
-            self.addProject(project)
-        self.window = min(self.window, other.window)
-        # TODO merge semantics
-
     def isActionable(self, item):
         if self.window:
             return item in self.queue[:self.window]
@@ -484,7 +466,8 @@
 class NodeRequest(object):
     """A request for a set of nodes."""
 
-    def __init__(self, build_set, job, nodeset):
+    def __init__(self, requestor, build_set, job, nodeset):
+        self.requestor = requestor
         self.build_set = build_set
         self.job = job
         self.nodeset = nodeset
@@ -519,7 +502,7 @@
         d = {}
         nodes = [n.image for n in self.nodeset.getNodes()]
         d['node_types'] = nodes
-        d['requestor'] = 'zuul'  # TODOv3(jeblair): better descriptor
+        d['requestor'] = self.requestor
         d['state'] = self.state
         d['state_time'] = self.state_time
         return d
@@ -535,21 +518,25 @@
     Jobs and playbooks reference this to keep track of where they
     originate."""
 
-    def __init__(self, project, branch, trusted):
+    def __init__(self, project, branch, path, trusted):
         self.project = project
         self.branch = branch
+        self.path = path
         self.trusted = trusted
 
+    def __str__(self):
+        return '%s/%s@%s' % (self.project, self.path, self.branch)
+
     def __repr__(self):
-        return '<SourceContext %s:%s trusted:%s>' % (self.project,
-                                                     self.branch,
-                                                     self.trusted)
+        return '<SourceContext %s trusted:%s>' % (str(self),
+                                                  self.trusted)
 
     def __deepcopy__(self, memo):
         return self.copy()
 
     def copy(self):
-        return self.__class__(self.project, self.branch, self.trusted)
+        return self.__class__(self.project, self.branch, self.path,
+                              self.trusted)
 
     def __ne__(self, other):
         return not self.__eq__(other)
@@ -559,6 +546,7 @@
             return False
         return (self.project == other.project and
                 self.branch == other.branch and
+                self.path == other.path and
                 self.trusted == other.trusted)
 
 
@@ -2165,18 +2153,8 @@
     def addProjectTemplate(self, project_template):
         self.project_templates[project_template.name] = project_template
 
-    def addProjectConfig(self, project_config, update_pipeline=True):
+    def addProjectConfig(self, project_config):
         self.project_configs[project_config.name] = project_config
-        # TODOv3(jeblair): tidy up the relationship between pipelines
-        # and projects and projectconfigs.  Specifically, move
-        # job_trees out of the pipeline since they are more dynamic
-        # than pipelines.  Remove the update_pipeline argument
-        if not update_pipeline:
-            return
-        for pipeline_name, pipeline_config in project_config.pipelines.items():
-            pipeline = self.pipelines[pipeline_name]
-            project = pipeline.source.getProject(project_config.name)
-            pipeline.job_trees[project] = pipeline_config.job_tree
 
     def _createJobTree(self, change, job_trees, parent):
         for tree in job_trees:
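The model.py hunks above thread a config-file path through SourceContext, so it now participates in str(), copy() and equality. A compact sketch of the changed class (only the touched methods, reconstructed from the diff):

```python
class SourceContext:
    """Where a config object came from: project, branch, and now the
    path of the config file within that branch."""
    def __init__(self, project, branch, path, trusted):
        self.project = project
        self.branch = branch
        self.path = path
        self.trusted = trusted

    def __str__(self):
        return '%s/%s@%s' % (self.project, self.path, self.branch)

    def copy(self):
        return SourceContext(self.project, self.branch, self.path,
                             self.trusted)

    def __eq__(self, other):
        if not isinstance(other, SourceContext):
            return False
        return (self.project == other.project and
                self.branch == other.branch and
                self.path == other.path and
                self.trusted == other.trusted)
```

Two contexts for the same project and branch but different files (zuul.yaml vs .zuul.yaml) now compare unequal, which matters once both filenames are merged, as in the mergeChanges hunk above.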
diff --git a/zuul/nodepool.py b/zuul/nodepool.py
index d116a2b..e94b950 100644
--- a/zuul/nodepool.py
+++ b/zuul/nodepool.py
@@ -26,7 +26,7 @@
         # Create a copy of the nodeset to represent the actual nodes
         # returned by nodepool.
         nodeset = job.nodeset.copy()
-        req = model.NodeRequest(build_set, job, nodeset)
+        req = model.NodeRequest(self.sched.hostname, build_set, job, nodeset)
         self.requests[req.uid] = req
 
         self.sched.zk.submitNodeRequest(req, self._updateNodeRequest)
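The nodepool.py hunk stamps each node request with the submitting scheduler's hostname, so a launcher watching a shared ZooKeeper tree can tell which requestor a request belongs to. A simplified stand-in (the real `model.NodeRequest` carries much more state):

```python
import socket


class NodeRequest:
    """Simplified stand-in for model.NodeRequest."""

    def __init__(self, requestor, build_set, job, nodeset):
        # ``requestor`` is the submitting scheduler's hostname, as
        # passed from self.sched.hostname in the hunk above.
        self.requestor = requestor
        self.build_set = build_set
        self.job = job
        self.nodeset = nodeset

    def toDict(self):
        # The requestor travels with the serialized request so the
        # launcher can attribute it to a scheduler.
        return {'requestor': self.requestor}


req = NodeRequest(socket.gethostname(), None, 'tox-py35', [])
```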
diff --git a/zuul/reporter/__init__.py b/zuul/reporter/__init__.py
index 541f259..6df3f1b 100644
--- a/zuul/reporter/__init__.py
+++ b/zuul/reporter/__init__.py
@@ -63,24 +63,26 @@
 
     # TODOv3(jeblair): Consider removing pipeline argument in favor of
     # item.pipeline
-    def _formatItemReport(self, pipeline, item):
+    def _formatItemReport(self, pipeline, item, with_jobs=True):
         """Format a report from the given items. Usually to provide results to
         a reporter taking free-form text."""
-        ret = self._getFormatter()(pipeline, item)
+        ret = self._getFormatter()(pipeline, item, with_jobs)
 
         if pipeline.footer_message:
             ret += '\n' + pipeline.footer_message
 
         return ret
 
-    def _formatItemReportStart(self, pipeline, item):
+    def _formatItemReportStart(self, pipeline, item, with_jobs=True):
         return pipeline.start_message.format(pipeline=pipeline)
 
-    def _formatItemReportSuccess(self, pipeline, item):
-        return (pipeline.success_message + '\n\n' +
-                self._formatItemReportJobs(pipeline, item))
+    def _formatItemReportSuccess(self, pipeline, item, with_jobs=True):
+        msg = pipeline.success_message
+        if with_jobs:
+            msg += '\n\n' + self._formatItemReportJobs(pipeline, item)
+        return msg
 
-    def _formatItemReportFailure(self, pipeline, item):
+    def _formatItemReportFailure(self, pipeline, item, with_jobs=True):
         if item.dequeued_needing_change:
             msg = 'This change depends on a change that failed to merge.\n'
         elif item.didMergerFail():
@@ -88,14 +90,15 @@
         elif item.getConfigError():
             msg = item.getConfigError()
         else:
-            msg = (pipeline.failure_message + '\n\n' +
-                   self._formatItemReportJobs(pipeline, item))
+            msg = pipeline.failure_message
+            if with_jobs:
+                msg += '\n\n' + self._formatItemReportJobs(pipeline, item)
         return msg
 
-    def _formatItemReportMergeFailure(self, pipeline, item):
+    def _formatItemReportMergeFailure(self, pipeline, item, with_jobs=True):
         return pipeline.merge_failure_message
 
-    def _formatItemReportDisabled(self, pipeline, item):
+    def _formatItemReportDisabled(self, pipeline, item, with_jobs=True):
         if item.current_build_set.result == 'SUCCESS':
             return self._formatItemReportSuccess(pipeline, item)
         elif item.current_build_set.result == 'FAILURE':
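The `with_jobs=True` default in the reporter hunks keeps every pre-existing caller unchanged, while a new caller (the SQL reporter below) can pass `False` because it records per-job results as database rows rather than report text. A small sketch of that default-flag pattern:

```python
def format_report(message, job_lines, with_jobs=True):
    """Sketch of the with_jobs pattern: True preserves the old
    behaviour; False returns the bare pipeline message."""
    if with_jobs:
        return message + '\n\n' + '\n'.join(job_lines)
    return message
```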
diff --git a/zuul/reporter/sql.py b/zuul/reporter/sql.py
new file mode 100644
index 0000000..b663a59
--- /dev/null
+++ b/zuul/reporter/sql.py
@@ -0,0 +1,94 @@
+# Copyright 2015 Rackspace Australia
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import datetime
+import logging
+import voluptuous as v
+
+from zuul.reporter import BaseReporter
+
+
+class SQLReporter(BaseReporter):
+    """Sends off reports to a database."""
+
+    name = 'sql'
+    log = logging.getLogger("zuul.reporter.sql.SQLReporter")
+
+    def __init__(self, reporter_config={}, sched=None, connection=None):
+        super(SQLReporter, self).__init__(
+            reporter_config, sched, connection)
+        self.result_score = reporter_config.get('score', None)
+
+    def report(self, source, pipeline, item):
+        """Create an entry into a database."""
+
+        if not self.connection.tables_established:
+            self.log.warning("SQL reporter (%s) is disabled", self)
+            return
+
+        if self.sched.config.has_option('zuul', 'url_pattern'):
+            url_pattern = self.sched.config.get('zuul', 'url_pattern')
+        else:
+            url_pattern = None
+
+        score = self.reporter_config.get('score', 0)
+
+        with self.connection.engine.begin() as conn:
+            buildset_ins = self.connection.zuul_buildset_table.insert().values(
+                zuul_ref=item.current_build_set.ref,
+                pipeline=item.pipeline.name,
+                project=item.change.project.name,
+                change=item.change.number,
+                patchset=item.change.patchset,
+                ref=item.change.refspec,
+                score=score,
+                message=self._formatItemReport(
+                    pipeline, item, with_jobs=False),
+            )
+            buildset_ins_result = conn.execute(buildset_ins)
+            build_inserts = []
+
+            for job in pipeline.getJobs(item):
+                build = item.current_build_set.getBuild(job.name)
+                if not build:
+                    # build hasn't begun. The SQL reporter can only send
+                    # back stats about builds. It doesn't know how to store
+                    # information about the change itself.
+                    continue
+
+                (result, url) = item.formatJobResult(job, url_pattern)
+
+                build_inserts.append({
+                    'buildset_id':
+                        buildset_ins_result.inserted_primary_key[0],
+                    'uuid': build.uuid,
+                    'job_name': build.job.name,
+                    'result': result,
+                    'start_time': datetime.datetime.fromtimestamp(
+                        build.start_time),
+                    'end_time': datetime.datetime.fromtimestamp(
+                        build.end_time),
+                    'voting': build.job.voting,
+                    'log_url': url,
+                    'node_name': build.node_name,
+                })
+            if build_inserts:
+                conn.execute(self.connection.zuul_build_table.insert(),
+                             build_inserts)
+
+
+def getSchema():
+    sql_reporter = v.Schema({
+        'score': int,
+    })
+    return sql_reporter
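The new reporter writes one buildset row plus one build row per completed job inside a single transaction. A stdlib `sqlite3` sketch of that shape, under an illustrative schema (the real code uses SQLAlchemy, with tables defined by the SQL connection driver, not the reporter):

```python
import sqlite3

# Illustrative schema only.
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE buildset (id INTEGER PRIMARY KEY, message TEXT)')
db.execute('CREATE TABLE build ('
           'buildset_id INTEGER, job_name TEXT, result TEXT)')

with db:  # one transaction, like engine.begin() in the reporter
    cur = db.execute('INSERT INTO buildset (message) VALUES (?)',
                     ('Build succeeded.',))
    buildset_id = cur.lastrowid
    # Builds that never started are skipped, as in the loop above.
    builds = [(buildset_id, 'tox-py35', 'SUCCESS'),
              (buildset_id, 'tox-pep8', 'FAILURE')]
    db.executemany('INSERT INTO build VALUES (?, ?, ?)', builds)

rows = db.execute('SELECT job_name, result FROM build').fetchall()
```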
diff --git a/zuul/rpclistener.py b/zuul/rpclistener.py
index c780df4..0fb557c 100644
--- a/zuul/rpclistener.py
+++ b/zuul/rpclistener.py
@@ -81,6 +81,8 @@
                         job.sendWorkFail()
                 else:
                     job.sendWorkFail()
+            except gear.InterruptedError:
+                return
             except Exception:
                 self.log.exception("Exception while getting job")
 
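The rpclistener.py hunk treats `gear.InterruptedError` as an expected shutdown signal rather than a failure: the worker loop returns quietly instead of logging a traceback. A hedged sketch with a stand-in exception class (gear itself is not imported here):

```python
class FakeInterruptedError(Exception):
    """Stand-in for gear.InterruptedError, raised when a blocking
    getJob() call is interrupted at shutdown."""


def worker_loop(get_job):
    handled = []
    while True:
        try:
            handled.append(get_job())
        except FakeInterruptedError:
            # Shutdown is expected: return quietly instead of logging
            # it as an unexpected exception.
            return handled
        except Exception:
            continue  # the real listener logs and keeps serving


jobs = iter(['zuul:enqueue', 'zuul:promote'])


def get_job():
    try:
        return next(jobs)
    except StopIteration:
        raise FakeInterruptedError()  # simulate shutdown wake-up


handled = worker_loop(get_job)
```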
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index 2679522..8eab545 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -22,6 +22,7 @@
 import pickle
 import six
 from six.moves import queue as Queue
+import socket
 import sys
 import threading
 import time
@@ -256,6 +257,7 @@
     def __init__(self, config, testonly=False):
         threading.Thread.__init__(self)
         self.daemon = True
+        self.hostname = socket.gethostname()
         self.wake_event = threading.Event()
         self.layout_lock = threading.Lock()
         self.run_handler_lock = threading.Lock()