Merge "gerrit: Allow sending data with ssh command"
diff --git a/NEWS.rst b/NEWS.rst
index bd09bfe..5fef40a 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -33,7 +33,7 @@
   matches all jobs).
 
 * Multiple triggers are now supported (currently Gerrit and a simple
-  Timer trigger ar supported).  Your layout.yaml file will need to
+  Timer trigger are supported).  Your layout.yaml file will need to
   change to add the key "gerrit:" inside of the "triggers:" list to
   specify a Gerrit trigger (and facilitate adding other kinds of
   triggers later).  See the sample layout.yaml and Zuul section of the
diff --git a/doc/source/cloner.rst b/doc/source/cloner.rst
index 2ddf0b5..72b0cb9 100644
--- a/doc/source/cloner.rst
+++ b/doc/source/cloner.rst
@@ -61,6 +61,23 @@
 
 .. program-output:: zuul-cloner --help
 
+
+Ref lookup order
+''''''''''''''''
+
+The Zuul cloner will attempt to look up references in the following
+order, using the first that exists (see the sketch below):
+
+ 1) Zuul reference for the indicated branch
+ 2) Zuul reference for the master branch
+ 3) The tip of the indicated branch
+ 4) The tip of the master branch
+
+The "indicated branch" is one of the following:
+
+ A) The project-specific override branch (from project_branches arg)
+ B) The user-specified branch (from the branch arg)
+ C) ZUUL_BRANCH (from the zuul_branch arg)
+
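+A minimal sketch of this fallback logic, assuming Zuul's
+``refs/zuul/<branch>/Z...`` reference naming and a local git checkout
+(the helper functions are illustrative, not part of the cloner itself):
+
+.. code-block:: python
+
+  import subprocess
+
+  def ref_exists(repo, ref):
+      # Ask git whether the ref resolves in the local repository.
+      return subprocess.call(
+          ['git', '-C', repo, 'rev-parse', '--verify', '--quiet', ref]) == 0
+
+  def resolve_ref(repo, indicated_branch, zuul_ref_name):
+      # Candidate refs, in the order the cloner falls back through them.
+      candidates = [
+          'refs/zuul/%s/%s' % (indicated_branch, zuul_ref_name),  # 1)
+          'refs/zuul/master/%s' % zuul_ref_name,                  # 2)
+          'refs/heads/%s' % indicated_branch,                     # 3)
+          'refs/heads/master',                                    # 4)
+      ]
+      for ref in candidates:
+          if ref_exists(repo, ref):
+              return ref
+      return None
+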
 Clone order
 -----------
 
diff --git a/doc/source/gating.rst b/doc/source/gating.rst
index 43a5928..fff2924 100644
--- a/doc/source/gating.rst
+++ b/doc/source/gating.rst
@@ -212,19 +212,20 @@
   }
 
 
-Cross projects dependencies
----------------------------
+Cross Project Testing
+---------------------
 
 When your projects are closely coupled together, you want to make sure
 changes entering the gate are going to be tested with the version of
 other projects currently enqueued in the gate (since they will
 eventually be merged and might introduce breaking features).
 
-Such dependencies can be defined in Zuul configuration by registering a job
-in a DependentPipeline of several projects. Whenever a change enters such a
-pipeline, it will create references for the other projects as well.  As an
-example, given a main project ``acme`` and a plugin ``plugin`` you can
-define a job ``acme-tests`` which should be run for both projects:
+Such relationships can be defined in Zuul configuration by registering
+a job in a DependentPipeline of several projects. Whenever a change
+enters such a pipeline, it will create references for the other
+projects as well.  As an example, given a main project ``acme`` and a
+plugin ``plugin`` you can define a job ``acme-tests`` which should be
+run for both projects:
 
 .. code-block:: yaml
 
@@ -280,3 +281,82 @@
 When your job fetches several repositories without changes ahead in the
 queue, they may not have a Z reference in which case you can just check
 out the branch.
+
+
+Cross Repository Dependencies
+-----------------------------
+
+Zuul permits users to specify dependencies across repositories.  Using
+a special footer in Git commit messages, users may specify that a
+change depends on another change in any repository known to Zuul.
+
+Zuul's cross-repository dependencies (CRD) form a directed acyclic
+graph (DAG), much like git itself, expressing a one-way dependency
+relationship between changes in different git repositories.  Change A
+may depend on B, but B may not depend on A.
+
+To use them, include "Depends-On: <gerrit-change-id>" in the footer of
+a commit message.  Use the full Change-ID ('I' + 40 characters).
+
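+For example, a commit message using such a footer might look like this
+(the Change-Id values shown are purely illustrative)::
+
+  Add a frobnicator to acme
+
+  This change needs the new plugin API from the plugin repository.
+
+  Depends-On: I6e31d93ebb2d4c1c6be28b09501e3dfbe83e8bd1
+  Change-Id: I3c7c8f7e2b6a4f2c9d0e1f2a3b4c5d6e7f809a1b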
+
+Gate Pipeline
+~~~~~~~~~~~~~
+
+When Zuul sees CRD changes, it serializes them in the usual manner when
+enqueuing them into a pipeline.  This means that if change A depends on
+B, then when they are added to the gate pipeline, B will appear first
+and A will follow.  If tests for B fail, both B and A will be removed
+from the pipeline, and it will not be possible for A to merge until B
+does.
+
+Note that if changes with CRD do not share a change queue then Zuul
+is unable to enqueue them together, and the first will be required to
+merge before the second is enqueued.
+
+Check Pipeline
+~~~~~~~~~~~~~~
+
+When changes are enqueued into the check pipeline, all of the related
+dependencies (both normal git-dependencies that come from parent commits
+as well as CRD changes) appear in a dependency graph, as in gate.  This
+means that even in the check pipeline, your change will be tested with
+its dependency.  So changes that were previously unable to be fully
+tested until a related change landed in a different repo may now be
+tested together from the start.
+
+All of the changes are still independent (so you will note that the
+whole pipeline does not share a graph as in gate), but for each change
+tested, all of its dependencies are visually connected to it, and they
+are used to construct the git references that Zuul uses when testing.
+When looking at this graph on the status page, you will note that the
+dependencies show up as grey dots, while the actual change tested shows
+up as red or green.  This is to indicate that the grey changes are only
+there to establish dependencies.  Even if one of the dependencies is
+also being tested, it will show up as a grey dot when used as a
+dependency, but separately and additionally will appear as its own red
+or green dot for its test.
+
+Multiple Changes
+~~~~~~~~~~~~~~~~
+
+A Gerrit change ID may refer to multiple changes (on multiple branches
+of the same project, or even multiple projects).  In these cases, Zuul
+will treat all of the changes with that change ID as dependencies.  So
+if you say that change in project A Depends-On a change ID that has
+changes in two branches of project B, then when testing the change to
+project A, both project B changes will be applied, and when deciding
+whether the project A change can merge, both changes must merge ahead
+of it.
+
+A change may depend on more than one Gerrit change ID as well.  So it
+is possible for a change in project A to depend on a change in project
+B and a change in project C.  Simply add more "Depends-On:" lines to
+the footer.
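+
+For example, a change that depends on changes in two other projects
+would simply carry two such footers (again with illustrative
+Change-Ids)::
+
+  Depends-On: I6e31d93ebb2d4c1c6be28b09501e3dfbe83e8bd1
+  Depends-On: If0123456789abcdef0123456789abcdef0123456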
+
+Cycles
+~~~~~~
+
+If a cycle is created by use of CRD, Zuul will abort its work very
+early.  There will be no message in Gerrit and no changes that are part
+of the cycle will be enqueued into any pipeline.  This is to protect
+Zuul from infinite loops.
diff --git a/doc/source/launchers.rst b/doc/source/launchers.rst
index b95354f..0a1e0e7 100644
--- a/doc/source/launchers.rst
+++ b/doc/source/launchers.rst
@@ -66,6 +66,11 @@
 **LOG_PATH**
   zuul also suggests a unique path for logs to the worker. This is
   "BASE_LOG_PATH/pipeline-name/job-name/uuid"
+**ZUUL_VOTING**
+  Whether Zuul considers this job voting or not.  Note that if Zuul is
+  reconfigured during the run, the voting status of a job may change
+  and this value will be out of date.  Values are '1' if voting, '0'
+  otherwise.
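+
+  For example, a job script could read this parameter to adjust its own
+  output (an illustrative sketch, not something Zuul requires)::
+
+    import os
+
+    # ZUUL_VOTING is '1' when the job is voting and '0' when it is not.
+    if os.environ.get('ZUUL_VOTING', '1') == '0':
+        print('Running as a non-voting job.')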
 
 Change related parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/zuul.rst b/doc/source/zuul.rst
index 7c3e384..d597f23 100644
--- a/doc/source/zuul.rst
+++ b/doc/source/zuul.rst
@@ -56,6 +56,10 @@
   Whether to start the internal Gearman server (default: False).
   ``start=true``
 
+**listen_address**
+  IP address or domain name on which to listen (default: all addresses).
+  ``listen_address=127.0.0.1``
+
 **log_config**
   Path to log config file for internal Gearman server.
   ``log_config=/etc/zuul/gearman-logging.yaml``
@@ -422,6 +426,12 @@
     provides.  This field is treated as a regular expression, and
     multiple refs may be listed.
 
+    *ignore-deletes*
+    When a branch is deleted, a ref-updated event is emitted with a newrev
+    of all zeros.  The ``ignore-deletes`` field is a boolean value that
+    determines whether Zuul ignores these deletion events.  The default is
+    True, meaning deleted refs do not trigger the pipeline.
+
     *approval*
     This is only used for ``comment-added`` events.  It only matches if
     the event has a matching approval associated with it.  Example:
@@ -603,6 +613,18 @@
   do when a change is added to the pipeline manager.  This can be used,
   for example, to reset the value of the Verified review category.
 
+**disabled**
+  Uses the same syntax as **success**, but describes what Zuul should
+  do when a pipeline is disabled.
+  See ``disable-after-consecutive-failures``.
+
+**disable-after-consecutive-failures**
+  If set, a pipeline can enter a ``disabled`` state if too many changes
+  in a row fail. When the number of consecutive failures exceeds this
+  value the pipeline will stop reporting to any of the ``success``,
+  ``failure`` or ``merge-failure`` reporters and instead only report to
+  the ``disabled`` reporters.  (No ``start`` reports are made when a
+  pipeline is disabled.)
+
 **precedence**
   Indicates how the build scheduler should prioritize jobs for
   different pipelines.  Each pipeline may have one precedence, jobs
@@ -924,9 +946,10 @@
 whether a change merges cleanly::
 
   - name: ^.*-merge$
-    failure-message: This change was unable to be automatically merged
-    with the current state of the repository. Please rebase your
-    change and upload a new patchset.
+    failure-message: This change or one of its cross-repo dependencies
+    was unable to be automatically merged with the current state of
+    its repository. Please rebase the change and upload a new
+    patchset.
 
 Projects
 """"""""
@@ -1037,7 +1060,7 @@
       - foobar-extra-special-job
 
 Individual jobs may optionally be added to pipelines (e.g. check,
-gate, et cetera) for a project, in addtion to those provided by
+gate, et cetera) for a project, in addition to those provided by
 templates.
 
 The order of the jobs listed in the project (which only affects the
diff --git a/etc/layout.yaml-sample b/etc/layout.yaml-sample
index 30a3352..53f6ba1 100644
--- a/etc/layout.yaml-sample
+++ b/etc/layout.yaml-sample
@@ -30,6 +30,7 @@
       gerrit:
         - event: ref-updated
           ref: ^(?!refs/).*$
+          ignore-deletes: False
 
   - name: gate
     manager: DependentPipelineManager
diff --git a/etc/status/public_html/jquery.zuul.js b/etc/status/public_html/jquery.zuul.js
index 1db3c8e..0ca2718 100644
--- a/etc/status/public_html/jquery.zuul.js
+++ b/etc/status/public_html/jquery.zuul.js
@@ -289,10 +289,10 @@
                 }
 
                 var $change_progress_row_left = $('<div />')
-                    .addClass('col-xs-3')
+                    .addClass('col-xs-4')
                     .append($change_link);
                 var $change_progress_row_right = $('<div />')
-                    .addClass('col-xs-9')
+                    .addClass('col-xs-8')
                     .append(this.change_total_progress_bar(change));
 
                 var $change_progress_row = $('<div />')
diff --git a/etc/status/public_html/styles/zuul.css b/etc/status/public_html/styles/zuul.css
index e833f4b..44fd737 100644
--- a/etc/status/public_html/styles/zuul.css
+++ b/etc/status/public_html/styles/zuul.css
@@ -16,7 +16,9 @@
 .zuul-change-total-result {
     height: 10px;
     width: 100px;
-    margin: 5px 0 0 0;
+    margin: 0;
+    display: inline-block;
+    vertical-align: middle;
 }
 
 .zuul-spinner,
diff --git a/test-requirements.txt b/test-requirements.txt
index c68b2db..4ae3eb3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -2,6 +2,11 @@
 
 coverage>=3.6
 sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
+# NOTE(tonyb) Pillow isn't directly needed but it's pulled in via
+# Collecting Pillow (from blockdiag>=1.5.0->sphinxcontrib-blockdiag>=0.5.5
+# So cap as per global-requirements until https://launchpad.net/bugs/1501995
+# is properly fixed
+Pillow>=2.4.0,<3.0.0 # MIT
 sphinxcontrib-blockdiag>=0.5.5
 discover
 fixtures>=0.3.14
diff --git a/tests/base.py b/tests/base.py
index 535bb7f..abbdb0a 100755
--- a/tests/base.py
+++ b/tests/base.py
@@ -47,8 +47,9 @@
 import zuul.rpclistener
 import zuul.launcher.gearman
 import zuul.lib.swift
-import zuul.merger.server
 import zuul.merger.client
+import zuul.merger.merger
+import zuul.merger.server
 import zuul.reporter.gerrit
 import zuul.reporter.smtp
 import zuul.trigger.gerrit
@@ -145,7 +146,7 @@
                                                         self.latest_patchset),
                                      'refs/tags/init')
         repo.head.reference = ref
-        repo.head.reset(index=True, working_tree=True)
+        zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
         path = os.path.join(self.upstream_root, self.project)
@@ -167,7 +168,7 @@
 
         r = repo.index.commit(msg)
         repo.head.reference = 'master'
-        repo.head.reset(index=True, working_tree=True)
+        zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
         repo.heads['master'].checkout()
         return r
@@ -258,8 +259,8 @@
                  "comment": "This is a comment"}
         return event
 
-    def addApproval(self, category, value, username='jenkins',
-                    granted_on=None):
+    def addApproval(self, category, value, username='reviewer_john',
+                    granted_on=None, message=''):
         if not granted_on:
             granted_on = time.time()
         approval = {
@@ -277,20 +278,20 @@
                 del self.patchsets[-1]['approvals'][i]
         self.patchsets[-1]['approvals'].append(approval)
         event = {'approvals': [approval],
-                 'author': {'email': 'user@example.com',
-                            'name': 'User Name',
-                            'username': 'username'},
+                 'author': {'email': 'author@example.com',
+                            'name': 'Patchset Author',
+                            'username': 'author_phil'},
                  'change': {'branch': self.branch,
                             'id': 'Iaa69c46accf97d0598111724a38250ae76a22c87',
                             'number': str(self.number),
-                            'owner': {'email': 'user@example.com',
-                                      'name': 'User Name',
-                                      'username': 'username'},
+                            'owner': {'email': 'owner@example.com',
+                                      'name': 'Change Owner',
+                                      'username': 'owner_jane'},
                             'project': self.project,
                             'subject': self.subject,
                             'topic': 'master',
                             'url': 'https://hostname/459'},
-                 'comment': '',
+                 'comment': message,
                  'patchSet': self.patchsets[-1],
                  'type': 'comment-added'}
         self.data['submitRecords'] = self.getSubmitRecords()
@@ -380,11 +381,16 @@
 class FakeGerrit(object):
     log = logging.getLogger("zuul.test.FakeGerrit")
 
-    def __init__(self, *args, **kw):
+    def __init__(self, hostname, username, port=29418, keyfile=None,
+                 changes_dbs={}):
+        self.hostname = hostname
+        self.username = username
+        self.port = port
+        self.keyfile = keyfile
         self.event_queue = Queue.Queue()
         self.fixture_dir = os.path.join(FIXTURE_DIR, 'gerrit')
         self.change_number = 0
-        self.changes = {}
+        self.changes = changes_dbs.get(hostname, {})
         self.queries = []
 
     def addFakeChange(self, project, branch, subject, status='NEW'):
@@ -407,7 +413,27 @@
     def review(self, project, changeid, message, action):
         number, ps = changeid.split(',')
         change = self.changes[int(number)]
+
+        # Add the approval back onto the change (i.e. simulate what gerrit
+        # would do).
+        # Usually when zuul leaves a review it creates a feedback loop where
+        # zuul's review results in another gerrit event (which is then picked
+        # up by zuul). However, we can't mimic this behaviour (by adding this
+        # approval event into the queue) as it would stop tests from checking
+        # what happens before this event is triggered. If a test needs to see
+        # what happens it can add its own verified event into the queue.
+        # Nevertheless, we can update the change with the new review in
+        # gerrit.
+
+        for cat in ['CRVW', 'VRFY', 'APRV']:
+            if cat in action:
+                change.addApproval(cat, action[cat], username=self.username)
+
+        if 'label' in action:
+            parts = action['label'].split('=')
+            change.addApproval(parts[0], parts[2], username=self.username)
+
         change.messages.append(message)
+
         if 'submit' in action:
             change.setMerged()
         if message:
@@ -598,6 +624,8 @@
             result = 'RUN_ERROR'
         else:
             data['result'] = result
+            data['node_labels'] = ['bare-necessities']
+            data['node_name'] = 'foo'
             work_fail = False
 
         changes = None
@@ -939,7 +967,19 @@
             args = [self.smtp_messages] + list(args)
             return FakeSMTP(*args, **kw)
 
-        zuul.lib.gerrit.Gerrit = FakeGerrit
+        # Set a changes database so multiple FakeGerrits can report back to
+        # a virtual canonical database keyed by the configured hostname.
+        self.gerrit_changes_dbs = {
+            self.config.get('gerrit', 'server'): {}
+        }
+
+        def FakeGerritFactory(*args, **kw):
+            kw['changes_dbs'] = self.gerrit_changes_dbs
+            return FakeGerrit(*args, **kw)
+
+        self.useFixture(fixtures.MonkeyPatch('zuul.lib.gerrit.Gerrit',
+                                             FakeGerritFactory))
+
         self.useFixture(fixtures.MonkeyPatch('smtplib.SMTP', FakeSMTPFactory))
 
         self.gerrit = FakeGerritTrigger(
@@ -1045,7 +1085,7 @@
         repo.create_tag('init')
 
         repo.head.reference = master
-        repo.head.reset(index=True, working_tree=True)
+        zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
         self.create_branch(project, 'mp')
@@ -1064,7 +1104,7 @@
         repo.index.commit('%s commit' % branch)
 
         repo.head.reference = repo.heads['master']
-        repo.head.reset(index=True, working_tree=True)
+        zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
     def ref_has_change(self, ref, change):
diff --git a/tests/fixtures/layout-disable-at.yaml b/tests/fixtures/layout-disable-at.yaml
new file mode 100644
index 0000000..a2b2526
--- /dev/null
+++ b/tests/fixtures/layout-disable-at.yaml
@@ -0,0 +1,21 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+    disabled:
+      smtp:
+        to: you@example.com
+    disable-after-consecutive-failures: 3
+
+projects:
+  - name: org/project
+    check:
+      - project-test1
diff --git a/tests/fixtures/layout-dont-ignore-deletes.yaml b/tests/fixtures/layout-dont-ignore-deletes.yaml
new file mode 100644
index 0000000..1cf3c71
--- /dev/null
+++ b/tests/fixtures/layout-dont-ignore-deletes.yaml
@@ -0,0 +1,16 @@
+includes:
+  - python-file: custom_functions.py
+
+pipelines:
+  - name: post
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: ref-updated
+          ref: ^(?!refs/).*$
+          ignore-deletes: False
+
+projects:
+  - name: org/project
+    post:
+      - project-post
diff --git a/tests/fixtures/layout-live-reconfiguration-add-job.yaml b/tests/fixtures/layout-live-reconfiguration-add-job.yaml
new file mode 100644
index 0000000..e4aea6f
--- /dev/null
+++ b/tests/fixtures/layout-live-reconfiguration-add-job.yaml
@@ -0,0 +1,38 @@
+pipelines:
+  - name: gate
+    manager: DependentPipelineManager
+    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+jobs:
+  - name: ^.*-merge$
+    failure-message: Unable to merge change
+    hold-following-changes: true
+  - name: project-testfile
+    files:
+      - '.*-requires'
+
+projects:
+  - name: org/project
+    merge-mode: cherry-pick
+    gate:
+      - project-merge:
+        - project-test1
+        - project-test2
+        - project-test3
+        - project-testfile
diff --git a/tests/fixtures/layout-live-reconfiguration-del-project.yaml b/tests/fixtures/layout-live-reconfiguration-del-project.yaml
new file mode 100644
index 0000000..07ffb2e
--- /dev/null
+++ b/tests/fixtures/layout-live-reconfiguration-del-project.yaml
@@ -0,0 +1,21 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+projects:
+  - name: org/project
+    merge-mode: cherry-pick
+    check:
+      - project-merge:
+        - project-test1
+        - project-test2
+        - project-testfile
diff --git a/tests/fixtures/layout-live-reconfiguration-failed-job.yaml b/tests/fixtures/layout-live-reconfiguration-failed-job.yaml
new file mode 100644
index 0000000..e811af1
--- /dev/null
+++ b/tests/fixtures/layout-live-reconfiguration-failed-job.yaml
@@ -0,0 +1,25 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+jobs:
+  - name: ^.*-merge$
+    failure-message: Unable to merge change
+    hold-following-changes: true
+
+projects:
+  - name: org/project
+    merge-mode: cherry-pick
+    check:
+      - project-merge:
+        - project-test2
+        - project-testfile
diff --git a/tests/fixtures/layout-live-reconfiguration-shared-queue.yaml b/tests/fixtures/layout-live-reconfiguration-shared-queue.yaml
new file mode 100644
index 0000000..ad3f666
--- /dev/null
+++ b/tests/fixtures/layout-live-reconfiguration-shared-queue.yaml
@@ -0,0 +1,62 @@
+pipelines:
+  - name: check
+    manager: IndependentPipelineManager
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        verified: 1
+    failure:
+      gerrit:
+        verified: -1
+
+  - name: gate
+    manager: DependentPipelineManager
+    failure-message: Build failed.  For information on how to proceed, see http://wiki.example.org/Test_Failures
+    trigger:
+      gerrit:
+        - event: comment-added
+          approval:
+            - approved: 1
+    success:
+      gerrit:
+        verified: 2
+        submit: true
+    failure:
+      gerrit:
+        verified: -2
+    start:
+      gerrit:
+        verified: 0
+    precedence: high
+
+jobs:
+  - name: ^.*-merge$
+    failure-message: Unable to merge change
+    hold-following-changes: true
+  - name: project1-project2-integration
+    queue-name: integration
+
+projects:
+  - name: org/project1
+    check:
+      - project1-merge:
+        - project1-test1
+        - project1-test2
+    gate:
+      - project1-merge:
+        - project1-test1
+        - project1-test2
+
+  - name: org/project2
+    check:
+      - project2-merge:
+        - project2-test1
+        - project2-test2
+        - project1-project2-integration
+    gate:
+      - project2-merge:
+        - project2-test1
+        - project2-test2
+        - project1-project2-integration
diff --git a/tests/fixtures/layouts/good_layout.yaml b/tests/fixtures/layouts/good_layout.yaml
index fc2effd..9ba1806 100644
--- a/tests/fixtures/layouts/good_layout.yaml
+++ b/tests/fixtures/layouts/good_layout.yaml
@@ -20,6 +20,7 @@
       gerrit:
         - event: ref-updated
           ref: ^(?!refs/).*$
+          ignore-deletes: True
 
   - name: gate
     manager: DependentPipelineManager
diff --git a/tests/test_requirements.py b/tests/test_requirements.py
index 120e37e..4316925 100644
--- a/tests/test_requirements.py
+++ b/tests/test_requirements.py
@@ -52,13 +52,14 @@
         self.assertEqual(len(self.history), 0)
 
         # Add a too-old +1, should not be enqueued
-        A.addApproval('VRFY', 1, granted_on=time.time() - 72 * 60 * 60)
+        A.addApproval('VRFY', 1, username='jenkins',
+                      granted_on=time.time() - 72 * 60 * 60)
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # Add a recent +1
-        self.fake_gerrit.addEvent(A.addApproval('VRFY', 1))
+        self.fake_gerrit.addEvent(A.addApproval('VRFY', 1, username='jenkins'))
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -95,7 +96,8 @@
         self.assertEqual(len(self.history), 0)
 
         # Add an old +1 which should be enqueued
-        A.addApproval('VRFY', 1, granted_on=time.time() - 72 * 60 * 60)
+        A.addApproval('VRFY', 1, username='jenkins',
+                      granted_on=time.time() - 72 * 60 * 60)
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -126,7 +128,7 @@
         self.assertEqual(len(self.history), 0)
 
         # Add an approval from Jenkins
-        A.addApproval('VRFY', 1)
+        A.addApproval('VRFY', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -157,7 +159,7 @@
         self.assertEqual(len(self.history), 0)
 
         # Add an approval from Jenkins
-        A.addApproval('VRFY', 1)
+        A.addApproval('VRFY', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -188,13 +190,13 @@
         self.assertEqual(len(self.history), 0)
 
         # A -1 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -1)
+        A.addApproval('VRFY', -1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # A +1 should allow it to be enqueued
-        A.addApproval('VRFY', 1)
+        A.addApproval('VRFY', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -225,19 +227,19 @@
         self.assertEqual(len(self.history), 0)
 
         # A -1 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -1)
+        A.addApproval('VRFY', -1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
         # A -2 from jenkins should not cause it to be enqueued
-        A.addApproval('VRFY', -2)
+        A.addApproval('VRFY', -2, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 0)
 
-        # A +1 should allow it to be enqueued
-        A.addApproval('VRFY', 1)
+        # A +1 from jenkins should allow it to be enqueued
+        A.addApproval('VRFY', 1, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
@@ -251,7 +253,7 @@
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
 
-        B.addApproval('VRFY', 2)
+        B.addApproval('VRFY', 2, username='jenkins')
         self.fake_gerrit.addEvent(comment)
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 2)
diff --git a/tests/test_scheduler.py b/tests/test_scheduler.py
index 032a8f8..a257440 100755
--- a/tests/test_scheduler.py
+++ b/tests/test_scheduler.py
@@ -111,6 +111,9 @@
         self.assertReportedStat(
             'zuul.pipeline.gate.org.project.total_changes', value='1|c')
 
+        for build in self.builds:
+            self.assertEqual(build.parameters['ZUUL_VOTING'], '1')
+
     def test_initial_pipeline_gauges(self):
         "Test that each pipeline reported its length on start"
         pipeline_names = self.sched.layout.pipelines.keys()
@@ -891,6 +894,54 @@
         self.assertEqual(len(self.history), 1)
         self.assertIn('project-post', job_names)
 
+    def test_post_ignore_deletes(self):
+        "Test that deleting refs does not trigger post jobs"
+
+        e = {
+            "type": "ref-updated",
+            "submitter": {
+                "name": "User Name",
+            },
+            "refUpdate": {
+                "oldRev": "90f173846e3af9154517b88543ffbd1691f31366",
+                "newRev": "0000000000000000000000000000000000000000",
+                "refName": "master",
+                "project": "org/project",
+            }
+        }
+        self.fake_gerrit.addEvent(e)
+        self.waitUntilSettled()
+
+        job_names = [x.name for x in self.history]
+        self.assertEqual(len(self.history), 0)
+        self.assertNotIn('project-post', job_names)
+
+    def test_post_ignore_deletes_negative(self):
+        "Test that deleting refs does trigger post jobs"
+
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-dont-ignore-deletes.yaml')
+        self.sched.reconfigure(self.config)
+
+        e = {
+            "type": "ref-updated",
+            "submitter": {
+                "name": "User Name",
+            },
+            "refUpdate": {
+                "oldRev": "90f173846e3af9154517b88543ffbd1691f31366",
+                "newRev": "0000000000000000000000000000000000000000",
+                "refName": "master",
+                "project": "org/project",
+            }
+        }
+        self.fake_gerrit.addEvent(e)
+        self.waitUntilSettled()
+
+        job_names = [x.name for x in self.history]
+        self.assertEqual(len(self.history), 1)
+        self.assertIn('project-post', job_names)
+
     def test_build_configuration_branch(self):
         "Test that the right commits are on alternate branches"
 
@@ -1256,6 +1307,9 @@
             self.getJobFromHistory('nonvoting-project-test2').result,
             'FAILURE')
 
+        for build in self.builds:
+            self.assertEqual(build.parameters['ZUUL_VOTING'], '0')
+
     def test_check_queue_success(self):
         "Test successful check queue jobs."
 
@@ -2036,6 +2090,30 @@
         self.assertEqual(self.history[0].name, 'gate-noop')
         self.assertEqual(self.history[0].result, 'SUCCESS')
 
+    def test_file_head(self):
+        # This is a regression test for an observed bug.  A change
+        # with a file named "HEAD" in the root directory of the repo
+        # was processed by a merger.  It then was unable to reset the
+        # repo because of:
+        #   GitCommandError: 'git reset --hard HEAD' returned
+        #       with exit code 128
+        #   stderr: 'fatal: ambiguous argument 'HEAD': both revision
+        #       and filename
+        #   Use '--' to separate filenames from revisions'
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addPatchset(['HEAD'])
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
+        self.waitUntilSettled()
+
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertIn('Build succeeded', A.messages[0])
+        self.assertIn('Build succeeded', B.messages[0])
+
     def test_file_jobs(self):
         "Test that file jobs run only when appropriate"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -2241,6 +2319,286 @@
         self.assertEqual(A.data['status'], 'MERGED')
         self.assertEqual(A.reported, 2)
 
+    def test_live_reconfiguration_merge_conflict(self):
+        # A real-world bug: a change in a gate queue has a merge
+        # conflict and a job is added to its project while it's
+        # sitting in the queue.  The job gets added to the change and
+        # enqueued and the change gets stuck.
+        self.worker.registerFunction('build:project-test3')
+        self.worker.hold_jobs_in_build = True
+
+        # This change is fine.  It's here to stop the queue long
+        # enough for the next change to be subject to the
+        # reconfiguration, as well as to provide a conflict for the
+        # next change.  This change will succeed and merge.
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addPatchset(['conflict'])
+        A.addApproval('CRVW', 2)
+
+        # This change will be in merge conflict.  During the
+        # reconfiguration, we will add a job.  We want to make sure
+        # that doesn't cause it to get stuck.
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        B.addPatchset(['conflict'])
+        B.addApproval('CRVW', 2)
+
+        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+
+        self.waitUntilSettled()
+
+        # No jobs have run yet
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(B.reported, 1)
+        self.assertEqual(len(self.history), 0)
+
+        # Add the "project-test3" job.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-live-'
+                        'reconfiguration-add-job.yaml')
+        self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2)
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(B.reported, 2)
+        self.assertEqual(self.getJobFromHistory('project-merge').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test3').result,
+                         'SUCCESS')
+        self.assertEqual(len(self.history), 4)
+
+    def test_live_reconfiguration_failed_root(self):
+        # An extrapolation of test_live_reconfiguration_merge_conflict
+        # that tests that a job added to a job tree with a failed root
+        # does not run.
+        self.worker.registerFunction('build:project-test3')
+        self.worker.hold_jobs_in_build = True
+
+        # This change is fine.  It's here to stop the queue long
+        # enough for the next change to be subject to the
+        # reconfiguration.  This change will succeed and merge.
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addPatchset(['conflict'])
+        A.addApproval('CRVW', 2)
+        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.waitUntilSettled()
+        self.worker.release('.*-merge')
+        self.waitUntilSettled()
+
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        self.worker.addFailTest('project-merge', B)
+        B.addApproval('CRVW', 2)
+        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        self.waitUntilSettled()
+
+        self.worker.release('.*-merge')
+        self.waitUntilSettled()
+
+        # Both -merge jobs have run, but no others.
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(B.reported, 1)
+        self.assertEqual(self.history[0].result, 'SUCCESS')
+        self.assertEqual(self.history[0].name, 'project-merge')
+        self.assertEqual(self.history[1].result, 'FAILURE')
+        self.assertEqual(self.history[1].name, 'project-merge')
+        self.assertEqual(len(self.history), 2)
+
+        # Add the "project-test3" job.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-live-'
+                        'reconfiguration-add-job.yaml')
+        self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2)
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(B.reported, 2)
+        self.assertEqual(self.history[0].result, 'SUCCESS')
+        self.assertEqual(self.history[0].name, 'project-merge')
+        self.assertEqual(self.history[1].result, 'FAILURE')
+        self.assertEqual(self.history[1].name, 'project-merge')
+        self.assertEqual(self.history[2].result, 'SUCCESS')
+        self.assertEqual(self.history[3].result, 'SUCCESS')
+        self.assertEqual(self.history[4].result, 'SUCCESS')
+        self.assertEqual(len(self.history), 5)
+
+    def test_live_reconfiguration_failed_job(self):
+        # Test that a change with a removed failing job does not
+        # disrupt reconfiguration.  If a change has a failed job and
+        # that job is removed during a reconfiguration, we observed a
+        # bug where the code to re-set build statuses would run on
+        # that build and raise an exception because the job no longer
+        # existed.
+        self.worker.hold_jobs_in_build = True
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+
+        # This change will fail and later be removed by the reconfiguration.
+        self.worker.addFailTest('project-test1', A)
+
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.worker.release('.*-merge')
+        self.waitUntilSettled()
+        self.worker.release('project-test1')
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 0)
+
+        self.assertEqual(self.getJobFromHistory('project-merge').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'FAILURE')
+        self.assertEqual(len(self.history), 2)
+
+        # Remove the test1 job.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-live-'
+                        'reconfiguration-failed-job.yaml')
+        self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project-testfile').result,
+                         'SUCCESS')
+        self.assertEqual(len(self.history), 4)
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertIn('Build succeeded', A.messages[0])
+        # Ensure the removed job was not included in the report.
+        self.assertNotIn('project-test1', A.messages[0])
+
+    def test_live_reconfiguration_shared_queue(self):
+        # Test that a change with a failing job which was removed from
+        # this project but otherwise still exists in the system does
+        # not disrupt reconfiguration.
+
+        self.worker.hold_jobs_in_build = True
+
+        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
+
+        self.worker.addFailTest('project1-project2-integration', A)
+
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.worker.release('.*-merge')
+        self.waitUntilSettled()
+        self.worker.release('project1-project2-integration')
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 0)
+
+        self.assertEqual(self.getJobFromHistory('project1-merge').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory(
+            'project1-project2-integration').result, 'FAILURE')
+        self.assertEqual(len(self.history), 2)
+
+        # Remove the integration job.
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-live-'
+                        'reconfiguration-shared-queue.yaml')
+        self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project1-merge').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project1-test1').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory('project1-test2').result,
+                         'SUCCESS')
+        self.assertEqual(self.getJobFromHistory(
+            'project1-project2-integration').result, 'FAILURE')
+        self.assertEqual(len(self.history), 4)
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertIn('Build succeeded', A.messages[0])
+        # Ensure the removed job was not included in the report.
+        self.assertNotIn('project1-project2-integration', A.messages[0])
+
+    def test_live_reconfiguration_del_project(self):
+        # Test project deletion from layout
+        # while changes are enqueued
+
+        self.worker.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
+        C = self.fake_gerrit.addFakeChange('org/project1', 'master', 'C')
+
+        # A Depends-On: B
+        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+            A.subject, B.data['id'])
+        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.worker.release('.*-merge')
+        self.waitUntilSettled()
+        self.assertEqual(len(self.builds), 5)
+
+        # This layout defines only org/project, not org/project1
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-live-'
+                        'reconfiguration-del-project.yaml')
+        self.sched.reconfigure(self.config)
+        self.waitUntilSettled()
+
+        # Builds for C aborted, builds for A succeed,
+        # and have change B applied ahead
+        job_c = self.getJobFromHistory('project1-test1')
+        self.assertEqual(job_c.changes, '3,1')
+        self.assertEqual(job_c.result, 'ABORTED')
+
+        self.worker.hold_jobs_in_build = False
+        self.worker.release()
+        self.waitUntilSettled()
+
+        self.assertEqual(self.getJobFromHistory('project-test1').changes,
+                         '2,1 1,1')
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(C.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertEqual(B.reported, 0)
+        self.assertEqual(C.reported, 0)
+
+        self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
+        self.assertIn('Build succeeded', A.messages[0])
+
     def test_live_reconfiguration_functions(self):
         "Test live reconfiguration with a custom function"
         self.worker.registerFunction('build:node-project-test1:debian')
@@ -2483,7 +2841,7 @@
         self.worker.release('.*')
         self.waitUntilSettled()
 
-    def test_client_enqueue(self):
+    def test_client_enqueue_change(self):
         "Test that the RPC client can enqueue a change"
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
         A.addApproval('CRVW', 2)
@@ -2506,6 +2864,24 @@
         self.assertEqual(A.reported, 2)
         self.assertEqual(r, True)
 
+    def test_client_enqueue_ref(self):
+        "Test that the RPC client can enqueue a ref"
+
+        client = zuul.rpcclient.RPCClient('127.0.0.1',
+                                          self.gearman_server.port)
+        r = client.enqueue_ref(
+            pipeline='post',
+            project='org/project',
+            trigger='gerrit',
+            ref='master',
+            oldrev='90f173846e3af9154517b88543ffbd1691f31366',
+            newrev='d479a0bfcb34da57a31adb2a595c0cf687812543')
+        self.waitUntilSettled()
+        job_names = [x.name for x in self.history]
+        self.assertEqual(len(self.history), 1)
+        self.assertIn('project-post', job_names)
+        self.assertEqual(r, True)
+
     def test_client_enqueue_negative(self):
         "Test that the RPC client returns errors"
         client = zuul.rpcclient.RPCClient('127.0.0.1',
@@ -2968,9 +3344,10 @@
         self.registerJobs()
 
         self.assertEqual(
-            "Merge Failed.\n\nThis change was unable to be automatically "
-            "merged with the current state of the repository. Please rebase "
-            "your change and upload a new patchset.",
+            "Merge Failed.\n\nThis change or one of its cross-repo "
+            "dependencies was unable to be automatically merged with the "
+            "current state of its repository. Please rebase the change and "
+            "upload a new patchset.",
             self.sched.layout.pipelines['check'].merge_failure_message)
         self.assertEqual(
             "The merge failed! For more information...",
@@ -3396,6 +3773,48 @@
         self.assertEqual(A.data['status'], 'NEW')
         self.assertEqual(B.data['status'], 'NEW')
 
+    def test_crd_gate_unknown(self):
+        "Test unknown projects in dependent pipeline"
+        self.init_repo("org/unknown")
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'B')
+        A.addApproval('CRVW', 2)
+        B.addApproval('CRVW', 2)
+
+        # A Depends-On: B
+        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+            A.subject, B.data['id'])
+
+        B.addApproval('APRV', 1)
+        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.waitUntilSettled()
+
+        # Unknown projects cannot share a queue with any other project
+        # since they have no jobs in common with it (they have no jobs at
+        # all).  Changes which depend on changes in unknown projects should
+        # not be processed in a dependent pipeline.
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(A.reported, 0)
+        self.assertEqual(B.reported, 0)
+        self.assertEqual(len(self.history), 0)
+
+        # Simulate change B being gated outside this layout
+        self.fake_gerrit.addEvent(B.addApproval('APRV', 1))
+        B.setMerged()
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 0)
+
+        # Now that B is merged, A should be able to be enqueued and
+        # merged.
+        self.fake_gerrit.addEvent(A.addApproval('APRV', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'MERGED')
+        self.assertEqual(A.reported, 2)
+        self.assertEqual(B.data['status'], 'MERGED')
+        self.assertEqual(B.reported, 0)
+
     def test_crd_check(self):
         "Test cross-repo dependencies in independent pipelines"
 
@@ -3510,12 +3929,12 @@
         self.assertIn('Build succeeded', A.messages[0])
         self.assertIn('Build succeeded', B.messages[0])
 
-    def test_crd_check_reconfiguration(self):
+    def _test_crd_check_reconfiguration(self, project1, project2):
         "Test cross-repo dependencies re-enqueued in independent pipelines"
 
         self.gearman_server.hold_jobs_in_queue = True
-        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
+        A = self.fake_gerrit.addFakeChange(project1, 'master', 'A')
+        B = self.fake_gerrit.addFakeChange(project2, 'master', 'B')
 
         # A Depends-On: B
         A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
@@ -3548,6 +3967,17 @@
         self.assertEqual(self.history[0].changes, '2,1 1,1')
         self.assertEqual(len(self.sched.layout.pipelines['check'].queues), 0)
 
+    def test_crd_check_reconfiguration(self):
+        self._test_crd_check_reconfiguration('org/project1', 'org/project2')
+
+    def test_crd_undefined_project(self):
+        """Test that undefined projects in dependencies are handled for
+        independent pipelines"""
+        # It's a hack for fake gerrit,
+        # as it implies repo creation upon the creation of any change
+        self.init_repo("org/unknown")
+        self._test_crd_check_reconfiguration('org/project1', 'org/unknown')
+
     def test_crd_check_ignore_dependencies(self):
         "Test cross-repo dependencies can be ignored"
         self.config.set('zuul', 'layout_config',
@@ -3632,3 +4062,125 @@
         self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
         self.waitUntilSettled()
         self.assertEqual(self.history[-1].changes, '3,2 2,1 1,2')
+
+    def test_disable_at(self):
+        "Test a pipeline will only report to the disabled trigger when failing"
+
+        self.config.set('zuul', 'layout_config',
+                        'tests/fixtures/layout-disable-at.yaml')
+        self.sched.reconfigure(self.config)
+
+        self.assertEqual(3, self.sched.layout.pipelines['check'].disable_at)
+        self.assertEqual(
+            0, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B')
+        C = self.fake_gerrit.addFakeChange('org/project', 'master', 'C')
+        D = self.fake_gerrit.addFakeChange('org/project', 'master', 'D')
+        E = self.fake_gerrit.addFakeChange('org/project', 'master', 'E')
+        F = self.fake_gerrit.addFakeChange('org/project', 'master', 'F')
+        G = self.fake_gerrit.addFakeChange('org/project', 'master', 'G')
+        H = self.fake_gerrit.addFakeChange('org/project', 'master', 'H')
+        I = self.fake_gerrit.addFakeChange('org/project', 'master', 'I')
+        J = self.fake_gerrit.addFakeChange('org/project', 'master', 'J')
+        K = self.fake_gerrit.addFakeChange('org/project', 'master', 'K')
+
+        self.worker.addFailTest('project-test1', A)
+        self.worker.addFailTest('project-test1', B)
+        # Let C pass, resetting the counter
+        self.worker.addFailTest('project-test1', D)
+        self.worker.addFailTest('project-test1', E)
+        self.worker.addFailTest('project-test1', F)
+        self.worker.addFailTest('project-test1', G)
+        self.worker.addFailTest('project-test1', H)
+        # I passes as well, but should only report to the disabled reporters
+        self.worker.addFailTest('project-test1', J)
+        self.worker.addFailTest('project-test1', K)
+
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(
+            2, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+
+        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(
+            0, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+
+        self.fake_gerrit.addEvent(D.getPatchsetCreatedEvent(1))
+        self.fake_gerrit.addEvent(E.getPatchsetCreatedEvent(1))
+        self.fake_gerrit.addEvent(F.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # We should be disabled now
+        self.assertEqual(
+            3, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertTrue(self.sched.layout.pipelines['check']._disabled)
+
+        # We need to wait between each of these patches to make sure the
+        # smtp messages come back in an expected order
+        self.fake_gerrit.addEvent(G.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.fake_gerrit.addEvent(H.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.fake_gerrit.addEvent(I.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        # The first 6 changes (A-F) should have reported back to gerrit,
+        # thus leaving a message on each change
+        self.assertEqual(1, len(A.messages))
+        self.assertIn('Build failed.', A.messages[0])
+        self.assertEqual(1, len(B.messages))
+        self.assertIn('Build failed.', B.messages[0])
+        self.assertEqual(1, len(C.messages))
+        self.assertIn('Build succeeded.', C.messages[0])
+        self.assertEqual(1, len(D.messages))
+        self.assertIn('Build failed.', D.messages[0])
+        self.assertEqual(1, len(E.messages))
+        self.assertIn('Build failed.', E.messages[0])
+        self.assertEqual(1, len(F.messages))
+        self.assertIn('Build failed.', F.messages[0])
+
+        # The last 3 (GHI) would have only reported via smtp.
+        self.assertEqual(3, len(self.smtp_messages))
+        self.assertEqual(0, len(G.messages))
+        self.assertIn('Build failed.', self.smtp_messages[0]['body'])
+        self.assertIn('/7/1/check', self.smtp_messages[0]['body'])
+        self.assertEqual(0, len(H.messages))
+        self.assertIn('Build failed.', self.smtp_messages[1]['body'])
+        self.assertIn('/8/1/check', self.smtp_messages[1]['body'])
+        self.assertEqual(0, len(I.messages))
+        self.assertIn('Build succeeded.', self.smtp_messages[2]['body'])
+        self.assertIn('/9/1/check', self.smtp_messages[2]['body'])
+
+        # Now reload the configuration (simulate a HUP) to check the pipeline
+        # comes out of disabled
+        self.sched.reconfigure(self.config)
+
+        self.assertEqual(3, self.sched.layout.pipelines['check'].disable_at)
+        self.assertEqual(
+            0, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+
+        self.fake_gerrit.addEvent(J.getPatchsetCreatedEvent(1))
+        self.fake_gerrit.addEvent(K.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(
+            2, self.sched.layout.pipelines['check']._consecutive_failures)
+        self.assertFalse(self.sched.layout.pipelines['check']._disabled)
+
+        # J and K went back to gerrit
+        self.assertEqual(1, len(J.messages))
+        self.assertIn('Build failed.', J.messages[0])
+        self.assertEqual(1, len(K.messages))
+        self.assertIn('Build failed.', K.messages[0])
+        # No more messages reported via smtp
+        self.assertEqual(3, len(self.smtp_messages))
diff --git a/tests/test_zuultrigger.py b/tests/test_zuultrigger.py
index 2f0e4f0..0d52fc9 100644
--- a/tests/test_zuultrigger.py
+++ b/tests/test_zuultrigger.py
@@ -107,9 +107,10 @@
         self.assertEqual(E.reported, 0)
         self.assertEqual(
             B.messages[0],
-            "Merge Failed.\n\nThis change was unable to be automatically "
-            "merged with the current state of the repository. Please rebase "
-            "your change and upload a new patchset.")
+            "Merge Failed.\n\nThis change or one of its cross-repo "
+            "dependencies was unable to be automatically merged with the "
+            "current state of its repository. Please rebase the change and "
+            "upload a new patchset.")
 
         self.assertTrue("project:org/project status:open" in
                         self.fake_gerrit.queries)
@@ -133,8 +134,9 @@
         self.assertEqual(E.reported, 1)
         self.assertEqual(
             E.messages[0],
-            "Merge Failed.\n\nThis change was unable to be automatically "
-            "merged with the current state of the repository. Please rebase "
-            "your change and upload a new patchset.")
+            "Merge Failed.\n\nThis change or one of its cross-repo "
+            "dependencies was unable to be automatically merged with the "
+            "current state of its repository. Please rebase the change and "
+            "upload a new patchset.")
         self.assertEqual(self.fake_gerrit.queries[1],
                          "project:org/project status:open")
diff --git a/zuul/cmd/client.py b/zuul/cmd/client.py
index bc2c152..59ac419 100644
--- a/zuul/cmd/client.py
+++ b/zuul/cmd/client.py
@@ -56,6 +56,24 @@
                                  required=True)
         cmd_enqueue.set_defaults(func=self.enqueue)
 
+        cmd_enqueue = subparsers.add_parser('enqueue-ref',
+                                            help='enqueue a ref')
+        cmd_enqueue.add_argument('--trigger', help='trigger name',
+                                 required=True)
+        cmd_enqueue.add_argument('--pipeline', help='pipeline name',
+                                 required=True)
+        cmd_enqueue.add_argument('--project', help='project name',
+                                 required=True)
+        cmd_enqueue.add_argument('--ref', help='ref name',
+                                 required=True)
+        cmd_enqueue.add_argument(
+            '--oldrev', help='old revision',
+            default='0000000000000000000000000000000000000000')
+        cmd_enqueue.add_argument(
+            '--newrev', help='new revision',
+            default='0000000000000000000000000000000000000000')
+        cmd_enqueue.set_defaults(func=self.enqueue_ref)
+
         cmd_promote = subparsers.add_parser('promote',
                                             help='promote one or more changes')
         cmd_promote.add_argument('--pipeline', help='pipeline name',
@@ -82,6 +100,9 @@
         show_running_jobs.set_defaults(func=self.show_running_jobs)
 
         self.args = parser.parse_args()
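+        # An enqueue-ref with identical old and new revisions would be a
+        # no-op, so reject it up front.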
+        if self.args.func == self.enqueue_ref:
+            if self.args.oldrev == self.args.newrev:
+                parser.error("The old and new revisions must not be the same.")
 
     def setup_logging(self):
         """Client logging does not rely on conf file"""
@@ -112,6 +133,16 @@
                            change=self.args.change)
         return r
 
+    def enqueue_ref(self):
+        client = zuul.rpcclient.RPCClient(self.server, self.port)
+        r = client.enqueue_ref(pipeline=self.args.pipeline,
+                               project=self.args.project,
+                               trigger=self.args.trigger,
+                               ref=self.args.ref,
+                               oldrev=self.args.oldrev,
+                               newrev=self.args.newrev)
+        return r
+
     def promote(self):
         client = zuul.rpcclient.RPCClient(self.server, self.port)
         r = client.promote(pipeline=self.args.pipeline,
@@ -232,6 +263,12 @@
             'number': {
                 'title': 'Number'
             },
+            'node_labels': {
+                'title': 'Node Labels'
+            },
+            'node_name': {
+                'title': 'Node Name'
+            },
             'worker.name': {
                 'title': 'Worker'
             },
@@ -245,7 +282,7 @@
             'worker.fqdn': {
                 'title': 'Worker Domain'
             },
-            'worker.progam': {
+            'worker.program': {
                 'title': 'Worker Program'
             },
             'worker.version': {
diff --git a/zuul/cmd/cloner.py b/zuul/cmd/cloner.py
index 5bbe3a0..e4a0e7b 100755
--- a/zuul/cmd/cloner.py
+++ b/zuul/cmd/cloner.py
@@ -77,7 +77,7 @@
         )
 
         zuul_env = parser.add_argument_group(
-            'zuul environnement',
+            'zuul environment',
             'Let you override $ZUUL_* environment variables.'
         )
         for zuul_suffix in ZUUL_ENV_SUFFIXES:
@@ -88,17 +88,14 @@
             )
 
         args = parser.parse_args()
+        # Validate ZUUL_* arguments. If ref is provided then URL is required.
+        zuul_args = [zuul_opt for zuul_opt, val in vars(args).items()
+                     if zuul_opt.startswith('zuul') and val is not None]
+        if 'zuul_ref' in zuul_args and 'zuul_url' not in zuul_args:
+            parser.error("Specifying a Zuul ref requires a Zuul URL. "
+                         "Define Zuul arguments either via environment "
+                         "variables or using the options above.")
 
-        # Validate ZUUL_* arguments. If any ZUUL_* argument is set they
-        # must all be set, otherwise fallback to defaults.
-        zuul_missing = [zuul_opt for zuul_opt, val in vars(args).items()
-                        if zuul_opt.startswith('zuul') and val is None]
-        if (len(zuul_missing) > 0 and
-            len(zuul_missing) < len(ZUUL_ENV_SUFFIXES)):
-            parser.error(("Some Zuul parameters are not set:\n\t%s\n"
-                          "Define them either via environment variables or "
-                          "using options above." %
-                          "\n\t".join(sorted(zuul_missing))))
         self.args = args
 
     def setup_logging(self, color=False, verbose=False):
diff --git a/zuul/cmd/server.py b/zuul/cmd/server.py
index 832eae4..2d99a1f 100755
--- a/zuul/cmd/server.py
+++ b/zuul/cmd/server.py
@@ -62,7 +62,10 @@
         signal.signal(signal.SIGHUP, signal.SIG_IGN)
         self.read_config()
         self.setup_logging('zuul', 'log_config')
-        self.sched.reconfigure(self.config)
+        try:
+            self.sched.reconfigure(self.config)
+        except Exception:
+            self.log.exception("Reconfiguration failed:")
         signal.signal(signal.SIGHUP, self.reconfigure_handler)
 
     def exit_handler(self, signum, frame):
@@ -118,7 +121,12 @@
             import gear
             statsd_host = os.environ.get('STATSD_HOST')
             statsd_port = int(os.environ.get('STATSD_PORT', 8125))
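+            # Honor an optional listen_address for the embedded gearman
+            # server.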
+            if self.config.has_option('gearman_server', 'listen_address'):
+                host = self.config.get('gearman_server', 'listen_address')
+            else:
+                host = None
             gear.Server(4730,
+                        host=host,
                         statsd_host=statsd_host,
                         statsd_port=statsd_port,
                         statsd_prefix='zuul.geard')
diff --git a/zuul/launcher/gearman.py b/zuul/launcher/gearman.py
index 57ac5ca..69fb71b 100644
--- a/zuul/launcher/gearman.py
+++ b/zuul/launcher/gearman.py
@@ -277,6 +277,7 @@
                       ZUUL_PROJECT=item.change.project.name)
         params['ZUUL_PIPELINE'] = pipeline.name
         params['ZUUL_URL'] = item.current_build_set.zuul_url
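+        # Tell the worker whether this job's result counts toward the
+        # change ('1' = voting, '0' = non-voting).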
+        params['ZUUL_VOTING'] = job.voting and '1' or '0'
         if hasattr(item.change, 'refspec'):
             changes_str = '^'.join(
                 ['%s:%s:%s' % (i.change.project.name, i.change.branch,
@@ -342,8 +343,7 @@
         build.parameters = params
 
         if job.name == 'noop':
-            build.result = 'SUCCESS'
-            self.sched.onBuildCompleted(build)
+            self.sched.onBuildCompleted(build, 'SUCCESS')
             return build
 
         gearman_job = gear.Job(name, json.dumps(params),
@@ -423,16 +423,17 @@
 
         build = self.builds.get(job.unique)
         if build:
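+            # Record any node information the worker returned so it can be
+            # reported and used for per-label statistics.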
+            data = getJobData(job)
+            build.node_labels = data.get('node_labels', [])
+            build.node_name = data.get('node_name')
             if not build.canceled:
                 if result is None:
-                    data = getJobData(job)
                     result = data.get('result')
                 if result is None:
                     build.retry = True
                 self.log.info("Build %s complete, result %s" %
                               (job, result))
-                build.result = result
-                self.sched.onBuildCompleted(build)
+                self.sched.onBuildCompleted(build, result)
             # The test suite expects the build to be removed from the
             # internal dict after it's added to the report queue.
             del self.builds[job.unique]
diff --git a/zuul/layoutvalidator.py b/zuul/layoutvalidator.py
index 88d10e2..781f475 100644
--- a/zuul/layoutvalidator.py
+++ b/zuul/layoutvalidator.py
@@ -61,6 +61,7 @@
                       'username': toList(str),
                       'branch': toList(str),
                       'ref': toList(str),
+                      'ignore-deletes': bool,
                       'approval': toList(variable_dict),
                       'require-approval': toList(require_approval),
                       }
@@ -112,6 +113,9 @@
                 'failure': report_actions,
                 'merge-failure': report_actions,
                 'start': report_actions,
+                'disabled': report_actions,
+                'disable-after-consecutive-failures':
+                    v.All(int, v.Range(min=1)),
                 'window': window,
                 'window-floor': window_floor,
                 'window-increase-type': window_type,
diff --git a/zuul/lib/cloner.py b/zuul/lib/cloner.py
index d697648..0ac7f0f 100644
--- a/zuul/lib/cloner.py
+++ b/zuul/lib/cloner.py
@@ -138,8 +138,11 @@
         if project in self.project_branches:
             indicated_branch = self.project_branches[project]
 
-        override_zuul_ref = re.sub(self.zuul_branch, indicated_branch,
-                                   self.zuul_ref)
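+        # Only derive a Zuul ref for the indicated branch when one is
+        # actually available.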
+        if indicated_branch:
+            override_zuul_ref = re.sub(self.zuul_branch, indicated_branch,
+                                       self.zuul_ref)
+        else:
+            override_zuul_ref = None
 
         if indicated_branch and repo.hasBranch(indicated_branch):
             self.log.info("upstream repo has branch %s", indicated_branch)
@@ -150,14 +153,18 @@
             # FIXME should be origin HEAD branch which might not be 'master'
             fallback_branch = 'master'
 
-        fallback_zuul_ref = re.sub(self.zuul_branch, fallback_branch,
-                                   self.zuul_ref)
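+        # Likewise, a fallback Zuul ref can only be derived when ZUUL_BRANCH
+        # is set.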
+        if self.zuul_branch:
+            fallback_zuul_ref = re.sub(self.zuul_branch, fallback_branch,
+                                       self.zuul_ref)
+        else:
+            fallback_zuul_ref = None
 
         # If we have a non empty zuul_ref to use, use it. Otherwise we fall
         # back to checking out the branch.
         if ((override_zuul_ref and
             self.fetchFromZuul(repo, project, override_zuul_ref)) or
-            (fallback_zuul_ref != override_zuul_ref and
+            (fallback_zuul_ref and
+             fallback_zuul_ref != override_zuul_ref and
             self.fetchFromZuul(repo, project, fallback_zuul_ref))):
             # Work around a bug in GitPython which can not parse FETCH_HEAD
             gitcmd = git.Git(dest)
@@ -169,9 +176,9 @@
             # Checkout branch
             self.log.info("Falling back to branch %s", fallback_branch)
             try:
-                repo.checkout('remotes/origin/%s' % fallback_branch)
+                commit = repo.checkout('remotes/origin/%s' % fallback_branch)
             except (ValueError, GitCommandError):
                 self.log.exception("Fallback branch not found: %s",
                                    fallback_branch)
-            self.log.info("Prepared %s repo with branch %s",
-                          project, fallback_branch)
+            self.log.info("Prepared %s repo with branch %s at commit %s",
+                          project, fallback_branch, commit)
diff --git a/zuul/lib/gerrit.py b/zuul/lib/gerrit.py
index 025de3d..5889ed0 100644
--- a/zuul/lib/gerrit.py
+++ b/zuul/lib/gerrit.py
@@ -146,7 +146,7 @@
 
     def simpleQuery(self, query):
         def _query_chunk(query):
-            args = '--current-patch-set'
+            args = '--commit-message --current-patch-set'
 
             cmd = 'gerrit query --format json %s %s' % (
                 args, query)
diff --git a/zuul/merger/merger.py b/zuul/merger/merger.py
index f36b974..1e881bf 100644
--- a/zuul/merger/merger.py
+++ b/zuul/merger/merger.py
@@ -20,6 +20,21 @@
 import zuul.model
 
 
+def reset_repo_to_head(repo):
+    # This lets us reset the repo even if there is a file in the root
+    # directory named 'HEAD'.  Currently, GitPython does not allow us
+    # to instruct it to always include the '--' to disambiguate.  This
+    # should no longer be necessary if this PR merges:
+    #   https://github.com/gitpython-developers/GitPython/pull/319
+    try:
+        repo.git.reset('--hard', 'HEAD', '--')
+    except git.GitCommandError as e:
+        # Newer versions of git may exit with status 1 to indicate that
+        # unstaged modifications remain after the reset
+        if e.status != 1:
+            raise
+
+
 class ZuulReference(git.Reference):
     _common_path_default = "refs/zuul"
     _points_to_commits_only = True
@@ -71,9 +86,9 @@
         return repo
 
     def reset(self):
-        repo = self.createRepoObject()
         self.log.debug("Resetting repository %s" % self.local_path)
         self.update()
+        repo = self.createRepoObject()
         origin = repo.remotes.origin
         for ref in origin.refs:
             if ref.remote_head == 'HEAD':
@@ -82,7 +97,7 @@
 
         # Reset to remote HEAD (usually origin/master)
         repo.head.reference = origin.refs['HEAD']
-        repo.head.reset(index=True, working_tree=True)
+        reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
     def prune(self):
@@ -114,7 +129,8 @@
         repo = self.createRepoObject()
         self.log.debug("Checking out %s" % ref)
         repo.head.reference = ref
-        repo.head.reset(index=True, working_tree=True)
+        reset_repo_to_head(repo)
+        return repo.head.commit
 
     def cherryPick(self, ref):
         repo = self.createRepoObject()
@@ -152,7 +168,7 @@
 
     def createZuulRef(self, ref, commit='HEAD'):
         repo = self.createRepoObject()
-        self.log.debug("CreateZuulRef %s at %s" % (ref, commit))
+        self.log.debug("CreateZuulRef %s at %s on %s" % (ref, commit, repo))
         ref = ZuulReference.create(repo, ref, commit)
         return ref.commit
 
diff --git a/zuul/model.py b/zuul/model.py
index 4b907c3..1b94029 100644
--- a/zuul/model.py
+++ b/zuul/model.py
@@ -22,6 +22,8 @@
                                   'ordereddict.OrderedDict'])
 
 
+EMPTY_GIT_REF = '0' * 40  # git sha of all zeros, used during creates/deletes
+
 MERGER_MERGE = 1          # "git merge"
 MERGER_MERGE_RESOLVE = 2  # "git merge -s resolve"
 MERGER_CHERRY_PICK = 3    # "git cherry-pick"
@@ -82,6 +84,10 @@
         self.start_actions = None
         self.success_actions = None
         self.failure_actions = None
+        self.disabled_actions = None
+        self.disable_at = None
+        self._consecutive_failures = 0
+        self._disabled = False
         self.window = None
         self.window_floor = None
         self.window_increase_type = None
@@ -431,9 +437,13 @@
 
 
 class Project(object):
-    def __init__(self, name):
+    def __init__(self, name, foreign=False):
         self.name = name
         self.merge_mode = MERGER_MERGE_RESOLVE
+        # Foreign projects are those referenced in the dependencies of
+        # layout projects; this matters when deciding whether to enqueue
+        # their changes.
+        self.foreign = foreign
 
     def __str__(self):
         return self.name
@@ -543,6 +553,9 @@
             t = JobTree(job)
             self.job_trees.append(t)
             return t
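+        # The job is already in the tree; return its existing subtree
+        # instead of adding a duplicate.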
+        for tree in self.job_trees:
+            if tree.job == job:
+                return tree
 
     def getJobs(self):
         jobs = []
@@ -578,6 +591,8 @@
         self.retry = False
         self.parameters = {}
         self.worker = Worker()
+        self.node_labels = []
+        self.node_name = None
 
     def __repr__(self):
         return ('<Build %s of %s on %s>' %
@@ -799,7 +814,9 @@
                 'canceled': build.canceled if build else None,
                 'retry': build.retry if build else None,
                 'number': build.number if build else None,
-                'worker': worker
+                'node_labels': build.node_labels if build else [],
+                'node_name': build.node_name if build else None,
+                'worker': worker,
             })
 
         if self.pipeline.haveAllJobsStarted(self):
@@ -1084,7 +1101,8 @@
 class EventFilter(BaseFilter):
     def __init__(self, trigger, types=[], branches=[], refs=[],
                  event_approvals={}, comments=[], emails=[], usernames=[],
-                 timespecs=[], required_approvals=[], pipelines=[]):
+                 timespecs=[], required_approvals=[], pipelines=[],
+                 ignore_deletes=True):
         super(EventFilter, self).__init__(
             required_approvals=required_approvals)
         self.trigger = trigger
@@ -1104,6 +1122,7 @@
         self.pipelines = [re.compile(x) for x in pipelines]
         self.event_approvals = event_approvals
         self.timespecs = timespecs
+        self.ignore_deletes = ignore_deletes
 
     def __repr__(self):
         ret = '<EventFilter'
@@ -1116,6 +1135,8 @@
             ret += ' branches: %s' % ', '.join(self._branches)
         if self._refs:
             ret += ' refs: %s' % ', '.join(self._refs)
+        if self.ignore_deletes:
+            ret += ' ignore_deletes: %s' % self.ignore_deletes
         if self.event_approvals:
             ret += ' event_approvals: %s' % ', '.join(
                 ['%s:%s' % a for a in self.event_approvals.items()])
@@ -1167,6 +1188,10 @@
                     matches_ref = True
         if self.refs and not matches_ref:
             return False
+        if self.ignore_deletes and event.newrev == EMPTY_GIT_REF:
+            # If the updated ref has an empty git sha (all 0s),
+            # then the ref is being deleted
+            return False
 
         # comments are ORed
         matches_comment_re = False
diff --git a/zuul/rpcclient.py b/zuul/rpcclient.py
index f43c3b9..609f636 100644
--- a/zuul/rpcclient.py
+++ b/zuul/rpcclient.py
@@ -56,6 +56,16 @@
                 }
         return not self.submitJob('zuul:enqueue', data).failure
 
+    def enqueue_ref(self, pipeline, project, trigger, ref, oldrev, newrev):
+        data = {'pipeline': pipeline,
+                'project': project,
+                'trigger': trigger,
+                'ref': ref,
+                'oldrev': oldrev,
+                'newrev': newrev,
+                }
+        return not self.submitJob('zuul:enqueue_ref', data).failure
+
     def promote(self, pipeline, change_ids):
         data = {'pipeline': pipeline,
                 'change_ids': change_ids,
diff --git a/zuul/rpclistener.py b/zuul/rpclistener.py
index 05b8d03..d54da9f 100644
--- a/zuul/rpclistener.py
+++ b/zuul/rpclistener.py
@@ -48,6 +48,7 @@
 
     def register(self):
         self.worker.registerFunction("zuul:enqueue")
+        self.worker.registerFunction("zuul:enqueue_ref")
         self.worker.registerFunction("zuul:promote")
         self.worker.registerFunction("zuul:get_running_jobs")
 
@@ -83,7 +84,7 @@
             except Exception:
                 self.log.exception("Exception while getting job")
 
-    def handle_enqueue(self, job):
+    def _common_enqueue(self, job):
         args = json.loads(job.arguments)
         event = model.TriggerEvent()
         errors = ''
@@ -106,6 +107,11 @@
         else:
             errors += 'Invalid pipeline: %s\n' % (args['pipeline'],)
 
+        return (args, event, errors, pipeline, project)
+
+    def handle_enqueue(self, job):
+        (args, event, errors, pipeline, project) = self._common_enqueue(job)
+
         if not errors:
             event.change_number, event.patch_number = args['change'].split(',')
             try:
@@ -119,6 +125,20 @@
             self.sched.enqueue(event)
             job.sendWorkComplete()
 
+    def handle_enqueue_ref(self, job):
+        (args, event, errors, pipeline, project) = self._common_enqueue(job)
+
+        if not errors:
+            event.ref = args['ref']
+            event.oldrev = args['oldrev']
+            event.newrev = args['newrev']
+
+        if errors:
+            job.sendWorkException(errors.encode('utf8'))
+        else:
+            self.sched.enqueue(event)
+            job.sendWorkComplete()
+
     def handle_promote(self, job):
         args = json.loads(job.arguments)
         pipeline_name = args['pipeline']
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index 9620036..6987e7f 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -272,9 +272,10 @@
             pipeline.failure_message = conf_pipeline.get('failure-message',
                                                          "Build failed.")
             pipeline.merge_failure_message = conf_pipeline.get(
-                'merge-failure-message', "Merge Failed.\n\nThis change was "
-                "unable to be automatically merged with the current state of "
-                "the repository. Please rebase your change and upload a new "
+                'merge-failure-message', "Merge Failed.\n\nThis change or one "
+                "of its cross-repo dependencies was unable to be "
+                "automatically merged with the current state of its "
+                "repository. Please rebase the change and upload a new "
                 "patchset.")
             pipeline.success_message = conf_pipeline.get('success-message',
                                                          "Build succeeded.")
@@ -285,7 +286,8 @@
                 'ignore-dependencies', False)
 
             action_reporters = {}
-            for action in ['start', 'success', 'failure', 'merge-failure']:
+            for action in ['start', 'success', 'failure', 'merge-failure',
+                           'disabled']:
                 action_reporters[action] = []
                 if conf_pipeline.get(action):
                     for reporter_name, params \
@@ -299,12 +301,16 @@
             pipeline.start_actions = action_reporters['start']
             pipeline.success_actions = action_reporters['success']
             pipeline.failure_actions = action_reporters['failure']
+            pipeline.disabled_actions = action_reporters['disabled']
             if len(action_reporters['merge-failure']) > 0:
                 pipeline.merge_failure_actions = \
                     action_reporters['merge-failure']
             else:
                 pipeline.merge_failure_actions = action_reporters['failure']
 
+            pipeline.disable_at = conf_pipeline.get(
+                'disable-after-consecutive-failures', None)
+
             pipeline.window = conf_pipeline.get('window', 20)
             pipeline.window_floor = conf_pipeline.get('window-floor', 3)
             pipeline.window_increase_type = conf_pipeline.get(
@@ -347,6 +353,7 @@
                     usernames = toList(trigger.get('username'))
                     if not usernames:
                         usernames = toList(trigger.get('username_filter'))
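+                    # Ref deletions (newrev of all zeros) are ignored by
+                    # default.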
+                    ignore_deletes = trigger.get('ignore-deletes', True)
                     f = EventFilter(
                         trigger=self.triggers['gerrit'],
                         types=toList(trigger['event']),
@@ -358,7 +365,8 @@
                         usernames=usernames,
                         required_approvals=toList(
                             trigger.get('require-approval')
-                        )
+                        ),
+                        ignore_deletes=ignore_deletes
                     )
                     manager.event_filters.append(f)
             if 'timer' in conf_pipeline['trigger']:
@@ -471,10 +479,6 @@
                         config_project.update(
                             {pipeline.name: expanded[pipeline.name] +
                              config_project.get(pipeline.name, [])})
-            # TODO: future enhancement -- handle the case where
-            # duplicate jobs have different children and you want all
-            # of the children to run after a single run of the
-            # parent).
 
             layout.projects[config_project['name']] = project
             mode = config_project.get('merge-mode', 'merge-resolve')
@@ -510,11 +514,15 @@
             name = reporter.name
         self.reporters[name] = reporter
 
-    def getProject(self, name):
+    def getProject(self, name, create_foreign=False):
         self.layout_lock.acquire()
         p = None
         try:
             p = self.layout.projects.get(name)
+            if p is None and create_foreign:
+                self.log.info("Registering foreign project: %s" % name)
+                p = Project(name, foreign=True)
+                self.layout.projects[name] = p
         finally:
             self.layout_lock.release()
         return p
@@ -533,25 +541,52 @@
     def onBuildStarted(self, build):
         self.log.debug("Adding start event for build: %s" % build)
         build.start_time = time.time()
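+        # Record how long the build waited between being launched and
+        # actually starting.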
+        try:
+            if statsd and build.pipeline:
+                jobname = build.job.name.replace('.', '_')
+                key = 'zuul.pipeline.%s.job.%s.wait_time' % (
+                    build.pipeline.name, jobname)
+                dt = int((build.start_time - build.launch_time) * 1000)
+                statsd.timing(key, dt)
+                statsd.incr(key)
+        except:
+            self.log.exception("Exception reporting runtime stats")
         event = BuildStartedEvent(build)
         self.result_event_queue.put(event)
         self.wake_event.set()
         self.log.debug("Done adding start event for build: %s" % build)
 
-    def onBuildCompleted(self, build):
-        self.log.debug("Adding complete event for build: %s" % build)
+    def onBuildCompleted(self, build, result):
+        self.log.debug("Adding complete event for build: %s result: %s" % (
+            build, result))
         build.end_time = time.time()
+        # Note, as soon as the result is set, other threads may act
+        # upon this, even though the event hasn't been fully
+        # processed.  Ensure that any other data from the event (eg,
+        # timing) is recorded before setting the result.
+        build.result = result
         try:
             if statsd and build.pipeline:
                 jobname = build.job.name.replace('.', '_')
+                key = 'zuul.pipeline.%s.all_jobs' % build.pipeline.name
+                statsd.incr(key)
+                for label in build.node_labels:
+                    # Jenkins includes the node name in its list of labels, so
+                    # we filter it out here, since that is not statistically
+                    # interesting.
+                    if label == build.node_name:
+                        continue
+                    dt = int((build.start_time - build.launch_time) * 1000)
+                    key = 'zuul.node_type.%s.job.%s.wait_time' % (
+                        label, jobname)
+                    statsd.timing(key, dt)
+                    statsd.incr(key)
                 key = 'zuul.pipeline.%s.job.%s.%s' % (build.pipeline.name,
                                                       jobname, build.result)
                 if build.result in ['SUCCESS', 'FAILURE'] and build.start_time:
                     dt = int((build.end_time - build.start_time) * 1000)
                     statsd.timing(key, dt)
                 statsd.incr(key)
-                key = 'zuul.pipeline.%s.all_jobs' % build.pipeline.name
-                statsd.incr(key)
         except:
             self.log.exception("Exception reporting runtime stats")
         event = BuildCompletedEvent(build)
@@ -672,7 +707,7 @@
                     continue
                 self.log.debug("Re-enqueueing changes for pipeline %s" % name)
                 items_to_remove = []
-                builds_to_remove = []
+                builds_to_cancel = []
                 last_head = None
                 for shared_queue in old_pipeline.queues:
                     for item in shared_queue.queue:
@@ -682,28 +717,30 @@
                         item.items_behind = []
                         item.pipeline = None
                         item.queue = None
-                        project = layout.projects.get(item.change.project.name)
-                        if not project:
-                            self.log.warning("Unable to find project for "
-                                             "change %s while reenqueueing" %
-                                             item.change)
-                            item.change.project = None
-                            items_to_remove.append(item)
-                            continue
-                        item.change.project = project
+                        project_name = item.change.project.name
+                        item.change.project = layout.projects.get(project_name)
+                        if not item.change.project:
+                            self.log.debug("Project %s not defined, "
+                                           "re-instantiating as foreign" %
+                                           project_name)
+                            project = Project(project_name, foreign=True)
+                            layout.projects[project_name] = project
+                            item.change.project = project
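+                        # Keep only builds whose jobs still apply to this
+                        # item in the new layout; the rest are canceled below.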
+                        item_jobs = new_pipeline.getJobs(item)
                         for build in item.current_build_set.getBuilds():
                             job = layout.jobs.get(build.job.name)
-                            if job:
+                            if job and job in item_jobs:
                                 build.job = job
                             else:
-                                builds_to_remove.append(build)
+                                item.removeBuild(build)
+                                builds_to_cancel.append(build)
                         if not new_pipeline.manager.reEnqueueItem(item,
                                                                   last_head):
                             items_to_remove.append(item)
                 for item in items_to_remove:
                     for build in item.current_build_set.getBuilds():
-                        builds_to_remove.append(build)
-                for build in builds_to_remove:
+                        builds_to_cancel.append(build)
+                for build in builds_to_cancel:
                     self.log.warning(
                         "Canceling build %s during reconfiguration" % (build,))
                     try:
@@ -856,7 +893,7 @@
         self.log.debug("Processing trigger event %s" % event)
         try:
             project = self.layout.projects.get(event.project_name)
-            if not project:
+            if not project or project.foreign:
                 self.log.debug("Project %s not found" % event.project_name)
                 return
 
@@ -1040,6 +1077,8 @@
         self.log.info("    %s" % self.pipeline.failure_actions)
         self.log.info("  On merge-failure:")
         self.log.info("    %s" % self.pipeline.merge_failure_actions)
+        self.log.info("  When disabled:")
+        self.log.info("    %s" % self.pipeline.disabled_actions)
 
     def getSubmitAllowNeeds(self):
         # Get a list of code review labels that are allowed to be
@@ -1081,19 +1120,20 @@
         return False
 
     def reportStart(self, change):
-        try:
-            self.log.info("Reporting start, action %s change %s" %
-                          (self.pipeline.start_actions, change))
-            msg = "Starting %s jobs." % self.pipeline.name
-            if self.sched.config.has_option('zuul', 'status_url'):
-                msg += "\n" + self.sched.config.get('zuul', 'status_url')
-            ret = self.sendReport(self.pipeline.start_actions,
-                                  change, msg)
-            if ret:
-                self.log.error("Reporting change start %s received: %s" %
-                               (change, ret))
-        except:
-            self.log.exception("Exception while reporting start:")
+        if not self.pipeline._disabled:
+            try:
+                self.log.info("Reporting start, action %s change %s" %
+                              (self.pipeline.start_actions, change))
+                msg = "Starting %s jobs." % self.pipeline.name
+                if self.sched.config.has_option('zuul', 'status_url'):
+                    msg += "\n" + self.sched.config.get('zuul', 'status_url')
+                ret = self.sendReport(self.pipeline.start_actions,
+                                      change, msg)
+                if ret:
+                    self.log.error("Reporting change start %s received: %s" %
+                                   (change, ret))
+            except:
+                self.log.exception("Exception while reporting start:")
 
     def sendReport(self, action_reporters, change, message):
         """Sends the built message off to configured reporters.
@@ -1176,6 +1216,18 @@
                 self.log.debug("Re-enqueing change %s in queue %s" %
                                (item.change, change_queue))
                 change_queue.enqueueItem(item)
+
+                # Re-set build results in case any new jobs have been
+                # added to the tree.
+                for build in item.current_build_set.getBuilds():
+                    if build.result:
+                        self.pipeline.setResult(item, build)
+                # Similarly, reset the item state.
+                if item.current_build_set.unable_to_merge:
+                    self.pipeline.setUnableToMerge(item)
+                if item.dequeued_needing_change:
+                    self.pipeline.setDequeuedNeedingChange(item)
+
                 self.reportStats(item)
                 return True
             else:
@@ -1343,6 +1395,7 @@
                                    "for change %s" % (build, item.change))
             build.result = 'CANCELED'
             canceled = True
+        self.updateBuildDescriptions(old_build_set)
         for item_behind in item.items_behind:
             self.log.debug("Canceling jobs for change %s, behind change %s" %
                            (item_behind.change, item.change))
@@ -1350,7 +1403,7 @@
                 canceled = True
         return canceled
 
-    def _processOneItem(self, item, nnfi, ready_ahead):
+    def _processOneItem(self, item, nnfi):
         changed = False
         item_ahead = item.item_ahead
         if item_ahead and (not item_ahead.live):
@@ -1370,7 +1423,7 @@
                     self.reportItem(item)
                 except MergeFailure:
                     pass
-            return (True, nnfi, ready_ahead)
+            return (True, nnfi)
         dep_items = self.getFailingDependentItems(item)
         actionable = change_queue.isActionable(item)
         item.active = actionable
@@ -1397,9 +1450,7 @@
                 if item.current_build_set.unable_to_merge:
                     failing_reasons.append("it has a merge conflict")
                     ready = False
-        if not ready:
-            ready_ahead = False
-        if actionable and ready_ahead and self.launchJobs(item):
+        if actionable and ready and self.launchJobs(item):
             changed = True
         if self.pipeline.didAnyJobFail(item):
             failing_reasons.append("at least one job failed")
@@ -1426,7 +1477,7 @@
         if failing_reasons:
             self.log.debug("%s is a failing item because %s" %
                            (item, failing_reasons))
-        return (changed, nnfi, ready_ahead)
+        return (changed, nnfi)
 
     def processQueue(self):
         # Do whatever needs to be done for each change in the queue
@@ -1435,10 +1486,9 @@
         for queue in self.pipeline.queues:
             queue_changed = False
             nnfi = None  # Nearest non-failing item
-            ready_ahead = True  # All build sets ahead are ready
             for item in queue.queue[:]:
-                item_changed, nnfi, ready_ahhead = self._processOneItem(
-                    item, nnfi, ready_ahead)
+                item_changed, nnfi = self._processOneItem(
+                    item, nnfi)
                 if item_changed:
                     queue_changed = True
                 self.reportStats(item)
@@ -1466,7 +1516,6 @@
 
     def onBuildStarted(self, build):
         self.log.debug("Build %s started" % build)
-        self.updateBuildDescriptions(build.build_set)
         return True
 
     def onBuildCompleted(self, build):
@@ -1476,7 +1525,6 @@
         self.pipeline.setResult(item, build)
         self.log.debug("Item %s status is now:\n %s" %
                        (item, item.formatStatus()))
-        self.updateBuildDescriptions(build.build_set)
         return True
 
     def onMergeCompleted(self, event):
@@ -1531,12 +1579,22 @@
             self.log.debug("success %s" % (self.pipeline.success_actions))
             actions = self.pipeline.success_actions
             item.setReportedResult('SUCCESS')
+            self.pipeline._consecutive_failures = 0
         elif not self.pipeline.didMergerSucceed(item):
             actions = self.pipeline.merge_failure_actions
             item.setReportedResult('MERGER_FAILURE')
         else:
             actions = self.pipeline.failure_actions
             item.setReportedResult('FAILURE')
+            self.pipeline._consecutive_failures += 1
+        if self.pipeline._disabled:
+            actions = self.pipeline.disabled_actions
+        # Check whether to disable only after the reporters have been
+        # selected, so that the failure which reaches disable_at is still
+        # reported normally and the disabled reporters are only used
+        # afterwards.
+        if (self.pipeline.disable_at and not self.pipeline._disabled and
+            self.pipeline._consecutive_failures >= self.pipeline.disable_at):
+            self.pipeline._disabled = True
         if actions:
             report = self.formatReport(item)
             try:
@@ -1784,10 +1842,11 @@
         if existing:
             return DynamicChangeQueueContextManager(existing)
         if change.project not in self.pipeline.getProjects():
-            return DynamicChangeQueueContextManager(None)
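+            # Projects not part of the pipeline are added dynamically
+            # instead of being rejected.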
+            self.pipeline.addProject(change.project)
         change_queue = ChangeQueue(self.pipeline)
         change_queue.addProject(change.project)
         self.pipeline.addQueue(change_queue)
+        self.log.debug("Dynamically created queue %s", change_queue)
         return DynamicChangeQueueContextManager(change_queue)
 
     def enqueueChangesAhead(self, change, quiet, ignore_requirements,
diff --git a/zuul/trigger/gerrit.py b/zuul/trigger/gerrit.py
index 175e3f8..05d7581 100644
--- a/zuul/trigger/gerrit.py
+++ b/zuul/trigger/gerrit.py
@@ -94,7 +94,7 @@
                     Can not get account information." % event.type)
             event.account = None
 
-        if event.change_number:
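+        # Only pre-cache change information for projects known to Zuul.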
+        if event.change_number and self.sched.getProject(event.project_name):
             # Call _getChange for the side effect of updating the
             # cache.  Note that this modifies Change objects outside
             # the main thread.
@@ -404,7 +404,11 @@
         if 'project' not in data:
             raise Exception("Change %s,%s not found" % (change.number,
                                                         change.patchset))
-        change.project = self.sched.getProject(data['project'])
+        # If the updated change came in as a dependency and its project is
+        # not defined in the layout, create a 'foreign' project for it.
+        change.project = self.sched.getProject(data['project'],
+                                               create_foreign=bool(history))
         change.branch = data['branch']
         change.url = data['url']
         max_ps = 0