Merge "Remove pep8 and pyflakes from test-requirements"
diff --git a/doc/source/admin/components.rst b/doc/source/admin/components.rst
index d6b0984..2e18b51 100644
--- a/doc/source/admin/components.rst
+++ b/doc/source/admin/components.rst
@@ -575,6 +575,16 @@
       The executor will observe system load and determine whether
       to accept more jobs every 30 seconds.
 
+   .. attr:: min_avail_mem
+      :default: 5.0
+
+      This is the minimum percentage of system RAM that must be
+      available.  If less memory than this is available, the
+      executor will stop accepting more than one job at a time until
+      more memory is available.  The available memory percentage is
+      calculated from the total available memory divided by the
+      total real memory multiplied by 100.  Buffers and cache are
+      considered available in the calculation.
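+
+      For example, on a system with 8000 MiB of real memory of which
+      350 MiB is available (including buffers and cache), the
+      available percentage is 350 / 8000 * 100, or about 4.4%, which
+      is below the default of 5.0, so the executor would stop
+      accepting more than one job at a time.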
+
    .. attr:: hostname
       :default: hostname of the server
 
diff --git a/doc/source/admin/monitoring.rst b/doc/source/admin/monitoring.rst
index e6e6139..6dbdb31 100644
--- a/doc/source/admin/monitoring.rst
+++ b/doc/source/admin/monitoring.rst
@@ -26,7 +26,7 @@
 
 These metrics are emitted by the Zuul :ref:`scheduler`:
 
-.. stat:: zuul.event.<driver>.event.<type>
+.. stat:: zuul.event.<driver>.<type>
    :type: counter
 
    Zuul will report counters for each type of event it receives from
diff --git a/doc/source/user/config.rst b/doc/source/user/config.rst
index 7fdf96e..597062e 100644
--- a/doc/source/user/config.rst
+++ b/doc/source/user/config.rst
@@ -692,6 +692,11 @@
       attribute to apply this behavior to a subset of a job's
       projects.
 
+      This value is also used to help select which variants of a job
+      to run.  If ``override-checkout`` is set, then Zuul will use
+      this value instead of the branch of the item being tested when
+      collecting jobs to run.
+
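+      For example (the job and branch names are illustrative), a job
+      which should be checked out, and have its variants selected,
+      from the ``stable`` branch regardless of the branch of the item
+      under test could be defined as:
+
+      .. code-block:: yaml
+
+         - job:
+             name: project-test2
+             parent: project-test1
+             override-checkout: stable
+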
    .. attr:: timeout
 
       The time in seconds that the job should be allowed to run before
@@ -837,6 +842,12 @@
          :attr:`job.override-checkout` attribute to apply the same
          behavior to all projects in a job.
 
+         This value is also used to help select which variants of a
+         job to run.  If ``override-checkout`` is set, then Zuul will
+         use this value instead of the branch of the item being tested
+         when collecting any jobs to run which are defined in this
+         project.
+
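+         For example (the project, job, and branch names are
+         illustrative), a job may pin one of its required projects to
+         its ``stable`` branch, which is then also used when
+         selecting variants of jobs defined in that project:
+
+         .. code-block:: yaml
+
+            - job:
+                name: project-test1
+                required-projects:
+                  - name: org/project1
+                    override-checkout: stable
+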
    .. attr:: vars
 
       A dictionary of variables to supply to Ansible.  When inheriting
@@ -895,6 +906,12 @@
       branch of an item, then that job is not run for the item.
       Otherwise, all of the job variants which match that branch (and
       any other selection criteria) are used when freezing the job.
+      However, if :attr:`job.override-checkout` or
+      :attr:`job.required-projects.override-checkout` are set for a
+      project, Zuul will attempt to use the job variants which match
+      the values supplied in ``override-checkout`` for jobs defined in
+      those projects.  This can be used to run a job defined in one
+      project on another project without a matching branch.
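+
+      For example, a job defined only on the ``master`` and
+      ``stable`` branches of one project may set
+      ``override-checkout: stable`` so that a second project can run
+      it even from a branch which does not exist in the first
+      project; in that case the ``stable`` variants of the job are
+      used.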
 
       This example illustrates a job called *run-tests* which uses a
       nodeset based on the current release of an operating system to
@@ -1179,11 +1196,11 @@
 `allowed-projects` job attribute can be used to restrict the projects
 which can invoke that job.
 
-Secrets, like most configuration items, are globally unique, though a
-secret may be defined on multiple branches of the same project as long
-as the contents are the same.  This is to aid in branch maintenance,
-so that creating a new branch based on an existing branch will not
-immediately produce a configuration error.
+Secrets, like most configuration items, are unique within a tenant,
+though a secret may be defined on multiple branches of the same
+project as long as the contents are the same.  This is to aid in
+branch maintenance, so that creating a new branch based on an existing
+branch will not immediately produce a configuration error.
 
 .. attr:: secret
 
@@ -1213,11 +1230,11 @@
 groups of node types once and referring to them by name, job
 configuration may be simplified.
 
-Nodesets, like most configuration items, are globally unique, though a
-nodeset may be defined on multiple branches of the same project as long
-as the contents are the same.  This is to aid in branch maintenance,
-so that creating a new branch based on an existing branch will not
-immediately produce a configuration error.
+Nodesets, like most configuration items, are unique within a tenant,
+though a nodeset may be defined on multiple branches of the same
+project as long as the contents are the same.  This is to aid in
+branch maintenance, so that creating a new branch based on an existing
+branch will not immediately produce a configuration error.
 
 .. code-block:: yaml
 
@@ -1301,11 +1318,11 @@
 represents the maximum number of jobs which use that semaphore at the
 same time.
 
-Semaphores, like most configuration items, are globally unique, though
-a semaphore may be defined on multiple branches of the same project as
-long as the value is the same.  This is to aid in branch maintenance,
-so that creating a new branch based on an existing branch will not
-immediately produce a configuration error.
+Semaphores, like most configuration items, are unique within a tenant,
+though a semaphore may be defined on multiple branches of the same
+project as long as the value is the same.  This is to aid in branch
+maintenance, so that creating a new branch based on an existing branch
+will not immediately produce a configuration error.
 
 Semaphores are never subject to dynamic reconfiguration.  If the value
 of a semaphore is changed, it will take effect only when the change
diff --git a/doc/source/user/jobs.rst b/doc/source/user/jobs.rst
index 9ec4646..820e316 100644
--- a/doc/source/user/jobs.rst
+++ b/doc/source/user/jobs.rst
@@ -281,14 +281,6 @@
             msg: "Project {{ item.name }} is at {{ item.src_dir }}
           with_items: {{ zuul.projects.values() | list }}
 
-
-   .. var:: _projects
-      :type: dict
-
-      The same as ``projects`` but a dictionary indexed by the
-      ``name`` value of each entry.  ``projects`` will be converted to
-      this.
-
    .. var:: tenant
 
       The name of the current Zuul tenant.
diff --git a/requirements.txt b/requirements.txt
index 39a2b02..3ab5850 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -27,3 +27,4 @@
 iso8601
 aiohttp
 uvloop;python_version>='3.5'
+psutil
diff --git a/tests/base.py b/tests/base.py
index c449242..fe01399 100755
--- a/tests/base.py
+++ b/tests/base.py
@@ -606,7 +606,7 @@
         return ret
 
     def getGitUrl(self, project):
-        return os.path.join(self.upstream_root, project.name)
+        return 'file://' + os.path.join(self.upstream_root, project.name)
 
 
 class GithubChangeReference(git.Reference):
@@ -1794,18 +1794,6 @@
         else:
             self._log_stream = sys.stdout
 
-        # NOTE(jeblair): this is temporary extra debugging to try to
-        # track down a possible leak.
-        orig_git_repo_init = git.Repo.__init__
-
-        def git_repo_init(myself, *args, **kw):
-            orig_git_repo_init(myself, *args, **kw)
-            self.log.debug("Created git repo 0x%x %s" %
-                           (id(myself), repr(myself)))
-
-        self.useFixture(fixtures.MonkeyPatch('git.Repo.__init__',
-                                             git_repo_init))
-
         handler = logging.StreamHandler(self._log_stream)
         formatter = logging.Formatter('%(asctime)s %(name)-32s '
                                       '%(levelname)-8s %(message)s')
@@ -1960,6 +1948,9 @@
         self.config.set(
             'executor', 'command_socket',
             os.path.join(self.test_root, 'executor.socket'))
+        self.config.set(
+            'merger', 'command_socket',
+            os.path.join(self.test_root, 'merger.socket'))
 
         self.statsd = FakeStatsd()
         if self.config.has_section('statsd'):
@@ -2016,6 +2007,7 @@
             self.config, self.sched)
         self.merge_client = zuul.merger.client.MergeClient(
             self.config, self.sched)
+        self.merge_server = None
         self.nodepool = zuul.nodepool.Nodepool(self.sched)
         self.zk = zuul.zk.ZooKeeper()
         self.zk.connect(self.zk_config)
@@ -2290,6 +2282,8 @@
         self.executor_server.release()
         self.executor_client.stop()
         self.merge_client.stop()
+        if self.merge_server:
+            self.merge_server.stop()
         self.executor_server.stop()
         self.sched.stop()
         self.sched.join()
@@ -2362,6 +2356,13 @@
         zuul.merger.merger.reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
 
+    def delete_branch(self, project, branch):
+        path = os.path.join(self.upstream_root, project)
+        repo = git.Repo(path)
+        repo.head.reference = repo.heads['master']
+        zuul.merger.merger.reset_repo_to_head(repo)
+        repo.delete_head(repo.heads[branch], force=True)
+
     def create_commit(self, project):
         path = os.path.join(self.upstream_root, project)
         repo = git.Repo(path)
diff --git a/tests/fixtures/config/allowed-projects/git/common-config/playbooks/base.yaml b/tests/fixtures/config/allowed-projects/git/common-config/playbooks/base.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/git/common-config/playbooks/base.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/allowed-projects/git/common-config/zuul.yaml b/tests/fixtures/config/allowed-projects/git/common-config/zuul.yaml
new file mode 100644
index 0000000..3000df5
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/git/common-config/zuul.yaml
@@ -0,0 +1,27 @@
+- pipeline:
+    name: check
+    manager: independent
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        Verified: 1
+    failure:
+      gerrit:
+        Verified: -1
+
+- job:
+    name: base
+    run: playbooks/base.yaml
+    parent: null
+
+- job:
+    name: restricted-job
+    allowed-projects:
+      - org/project1
+
+- project:
+    name: common-config
+    check:
+      jobs: []
diff --git a/tests/fixtures/config/allowed-projects/git/org_project1/zuul.yaml b/tests/fixtures/config/allowed-projects/git/org_project1/zuul.yaml
new file mode 100644
index 0000000..d3c98f3
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/git/org_project1/zuul.yaml
@@ -0,0 +1,10 @@
+- job:
+    name: test-project1
+    parent: restricted-job
+
+- project:
+    name: org/project1
+    check:
+      jobs:
+        - test-project1
+        - restricted-job
diff --git a/tests/fixtures/config/allowed-projects/git/org_project2/zuul.yaml b/tests/fixtures/config/allowed-projects/git/org_project2/zuul.yaml
new file mode 100644
index 0000000..bf0f07a
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/git/org_project2/zuul.yaml
@@ -0,0 +1,11 @@
+- job:
+    name: test-project2
+    parent: restricted-job
+    allowed-projects:
+      - org/project2
+
+- project:
+    name: org/project2
+    check:
+      jobs:
+        - test-project2
diff --git a/tests/fixtures/config/allowed-projects/git/org_project3/zuul.yaml b/tests/fixtures/config/allowed-projects/git/org_project3/zuul.yaml
new file mode 100644
index 0000000..43b59a6
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/git/org_project3/zuul.yaml
@@ -0,0 +1,5 @@
+- project:
+    name: org/project3
+    check:
+      jobs:
+        - restricted-job
diff --git a/tests/fixtures/config/allowed-projects/main.yaml b/tests/fixtures/config/allowed-projects/main.yaml
new file mode 100644
index 0000000..49ed838
--- /dev/null
+++ b/tests/fixtures/config/allowed-projects/main.yaml
@@ -0,0 +1,10 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-projects:
+          - common-config
+        untrusted-projects:
+          - org/project1
+          - org/project2
+          - org/project3
diff --git a/tests/fixtures/config/branch-mismatch/git/common-config/playbooks/base.yaml b/tests/fixtures/config/branch-mismatch/git/common-config/playbooks/base.yaml
new file mode 100644
index 0000000..f679dce
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/common-config/playbooks/base.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/branch-mismatch/git/common-config/zuul.yaml b/tests/fixtures/config/branch-mismatch/git/common-config/zuul.yaml
new file mode 100644
index 0000000..9954846
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/common-config/zuul.yaml
@@ -0,0 +1,22 @@
+- pipeline:
+    name: check
+    manager: independent
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        Verified: 1
+    failure:
+      gerrit:
+        Verified: -1
+
+- job:
+    name: base
+    parent: null
+    run: playbooks/base.yaml
+
+- project:
+    name: common-config
+    check:
+      jobs: []
diff --git a/tests/fixtures/config/branch-mismatch/git/org_project1/README b/tests/fixtures/config/branch-mismatch/git/org_project1/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/org_project1/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/branch-mismatch/git/org_project1/zuul.yaml b/tests/fixtures/config/branch-mismatch/git/org_project1/zuul.yaml
new file mode 100644
index 0000000..809f830
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/org_project1/zuul.yaml
@@ -0,0 +1,7 @@
+- job:
+    name: project-test1
+
+- project:
+    check:
+      jobs:
+        - project-test1
diff --git a/tests/fixtures/config/branch-mismatch/git/org_project2/README b/tests/fixtures/config/branch-mismatch/git/org_project2/README
new file mode 100644
index 0000000..9daeafb
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/org_project2/README
@@ -0,0 +1 @@
+test
diff --git a/tests/fixtures/config/branch-mismatch/git/org_project2/zuul.yaml b/tests/fixtures/config/branch-mismatch/git/org_project2/zuul.yaml
new file mode 100644
index 0000000..3a8e9df
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/git/org_project2/zuul.yaml
@@ -0,0 +1,13 @@
+- job:
+    name: project-test2
+    parent: project-test1
+    override-checkout: stable
+
+- project:
+    check:
+      jobs:
+        - project-test1:
+            required-projects:
+              - name: org/project1
+                override-checkout: stable
+        - project-test2
diff --git a/tests/fixtures/config/branch-mismatch/main.yaml b/tests/fixtures/config/branch-mismatch/main.yaml
new file mode 100644
index 0000000..950b117
--- /dev/null
+++ b/tests/fixtures/config/branch-mismatch/main.yaml
@@ -0,0 +1,9 @@
+- tenant:
+    name: tenant-one
+    source:
+      gerrit:
+        config-projects:
+          - common-config
+        untrusted-projects:
+          - org/project1
+          - org/project2
diff --git a/tests/fixtures/layouts/branch-deletion.yaml b/tests/fixtures/layouts/branch-deletion.yaml
new file mode 100644
index 0000000..f72902a
--- /dev/null
+++ b/tests/fixtures/layouts/branch-deletion.yaml
@@ -0,0 +1,34 @@
+- pipeline:
+    name: check
+    manager: independent
+    trigger:
+      gerrit:
+        - event: patchset-created
+    success:
+      gerrit:
+        Verified: 1
+    failure:
+      gerrit:
+        Verified: -1
+
+- job:
+    name: base
+    parent: null
+    run: playbooks/base.yaml
+
+- job:
+    name: project-test1
+    parent: base
+    branches: master
+
+- job:
+    name: project-test2
+    parent: base
+    branches: stable
+
+- project:
+    name: org/project
+    check:
+      jobs:
+        - project-test1
+        - project-test2
diff --git a/tests/unit/test_model.py b/tests/unit/test_model.py
index 784fcb3..5c586ca 100644
--- a/tests/unit/test_model.py
+++ b/tests/unit/test_model.py
@@ -320,50 +320,6 @@
                 "to shadow job base in base_project"):
             layout.addJob(base2)
 
-    def test_job_allowed_projects(self):
-        job = configloader.JobParser.fromYaml(self.tenant, self.layout, {
-            '_source_context': self.context,
-            '_start_mark': self.start_mark,
-            'name': 'job',
-            'parent': None,
-            'allowed-projects': ['project'],
-        })
-        self.layout.addJob(job)
-
-        project2 = model.Project('project2', self.source)
-        tpc2 = model.TenantProjectConfig(project2)
-        self.tenant.addUntrustedProject(tpc2)
-        context2 = model.SourceContext(project2, 'master',
-                                       'test', True)
-
-        project_template_parser = configloader.ProjectTemplateParser(
-            self.tenant, self.layout)
-        project_parser = configloader.ProjectParser(
-            self.tenant, self.layout, project_template_parser)
-        project2_config = project_parser.fromYaml(
-            [{
-                '_source_context': context2,
-                '_start_mark': self.start_mark,
-                'name': 'project2',
-                'gate': {
-                    'jobs': [
-                        'job'
-                    ]
-                }
-            }]
-        )
-        self.layout.addProjectConfig(project2_config)
-
-        change = model.Change(project2)
-        # Test master
-        change.branch = 'master'
-        item = self.queue.enqueueChange(change)
-        item.layout = self.layout
-        with testtools.ExpectedException(
-                Exception,
-                "Project project2 is not allowed to run job job"):
-            item.freezeJobGraph()
-
     def test_job_pipeline_allow_untrusted_secrets(self):
         self.pipeline.post_review = False
         job = configloader.JobParser.fromYaml(self.tenant, self.layout, {
diff --git a/tests/unit/test_scheduler.py b/tests/unit/test_scheduler.py
index 5db20b3..3d76510 100755
--- a/tests/unit/test_scheduler.py
+++ b/tests/unit/test_scheduler.py
@@ -161,6 +161,44 @@
         self.assertEqual(self.getJobFromHistory('project-test1').node,
                          'label2')
 
+    @simple_layout('layouts/branch-deletion.yaml')
+    def test_branch_deletion(self):
+        "Test the correct variant of a job runs on a branch"
+        # Start a standalone merger and unregister the merger
+        # functions from the executor so that merge operations are
+        # handled by the standalone merger, which keeps its own git
+        # cache.
+        self._startMerger()
+        for f in list(self.executor_server.merger_worker.functions.keys()):
+            f = str(f)
+            if f.startswith('merger:'):
+                self.executor_server.merger_worker.unRegisterFunction(f)
+
+        self.create_branch('org/project', 'stable')
+        A = self.fake_gerrit.addFakeChange('org/project', 'stable', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('project-test2').result,
+                         'SUCCESS')
+
+        self.delete_branch('org/project', 'stable')
+        # Delete the executor's cached copy of the upstream repos so
+        # that it must re-clone them without the deleted branch.
+        path = os.path.join(self.executor_src_root, 'review.example.com')
+        shutil.rmtree(path)
+
+        self.executor_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        build = self.builds[0]
+
+        # Make sure there is no stable branch in the checked out git repo.
+        pname = 'review.example.com/org/project'
+        work = build.getWorkspaceRepos([pname])
+        work = work[pname]
+        heads = set([str(x) for x in work.heads])
+        self.assertEqual(heads, set(['master']))
+        self.executor_server.hold_jobs_in_build = False
+        build.release()
+        self.waitUntilSettled()
+        self.assertEqual(self.getJobFromHistory('project-test1').result,
+                         'SUCCESS')
+
     def test_parallel_changes(self):
         "Test that changes are tested in parallel and merged in series"
 
@@ -4934,7 +4972,7 @@
         return repo_messages
 
     def _test_merge(self, mode):
-        us_path = os.path.join(
+        us_path = 'file://' + os.path.join(
             self.upstream_root, 'org/project-%s' % mode)
         expected_messages = [
             'initial commit',
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
index 4af5b47..44eda82 100755
--- a/tests/unit/test_v3.py
+++ b/tests/unit/test_v3.py
@@ -497,6 +497,72 @@
         self.waitUntilSettled()
 
 
+class TestBranchMismatch(ZuulTestCase):
+    tenant_config_file = 'config/branch-mismatch/main.yaml'
+
+    def test_job_override_branch(self):
+        "Test that override-checkout overrides branch matchers as well"
+
+        # Make sure the parent job repo is branched, so it gets
+        # implied branch matchers.
+        self.create_branch('org/project1', 'stable')
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project1', 'stable'))
+
+        # The child job repo should have a branch which does not exist
+        # in the parent job repo.
+        self.create_branch('org/project2', 'devel')
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project2', 'devel'))
+
+        # A job in a repo with a weird branch name should use the
+        # parent job from the parent job's master (default) branch.
+        A = self.fake_gerrit.addFakeChange('org/project2', 'devel', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        # project-test2 should run because it inherits from
+        # project-test1 and we will use the fallback branch to find
+        # project-test1 variants.  project-test1 also runs, but only
+        # because the required-projects override-checkout in the
+        # project-pipeline config selects its stable variant; the
+        # devel branch does not directly match any project-test1
+        # variant.
+        self.assertHistory([
+            dict(name='project-test1', result='SUCCESS', changes='1,1'),
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+        ], ordered=False)
+
+
+class TestAllowedProjects(ZuulTestCase):
+    tenant_config_file = 'config/allowed-projects/main.yaml'
+
+    def test_allowed_projects(self):
+        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(A.reported, 1)
+        self.assertIn('Build succeeded', A.messages[0])
+
+        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(B.reported, 1)
+        self.assertIn('Project org/project2 is not allowed '
+                      'to run job test-project2', B.messages[0])
+
+        C = self.fake_gerrit.addFakeChange('org/project3', 'master', 'C')
+        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(C.reported, 1)
+        self.assertIn('Project org/project3 is not allowed '
+                      'to run job restricted-job', C.messages[0])
+
+        self.assertHistory([
+            dict(name='test-project1', result='SUCCESS', changes='1,1'),
+            dict(name='restricted-job', result='SUCCESS', changes='1,1'),
+        ], ordered=False)
+
+
 class TestCentralJobs(ZuulTestCase):
     tenant_config_file = 'config/central-jobs/main.yaml'
 
diff --git a/zuul/ansible/action/normal.py b/zuul/ansible/action/normal.py
index 152f13f..35ae8cb 100644
--- a/zuul/ansible/action/normal.py
+++ b/zuul/ansible/action/normal.py
@@ -63,6 +63,8 @@
 
         Block any access of files outside the zuul work dir.
         '''
+        if self._task.args.get('get_mime') is not None:
+            raise AnsibleError("get_mime on localhost is forbidden")
         paths._fail_if_unsafe(self._task.args['path'])
 
     def handle_file(self):
diff --git a/zuul/ansible/library/command.py b/zuul/ansible/library/command.py
old mode 100644
new mode 100755
index 0fc6129..9be3e2f
--- a/zuul/ansible/library/command.py
+++ b/zuul/ansible/library/command.py
@@ -371,9 +371,12 @@
 
         # ZUUL: Replaced the execution loop with the zuul_runner run function
         cmd = subprocess.Popen(args, **kwargs)
-        t = threading.Thread(target=follow, args=(cmd.stdout, zuul_log_id))
-        t.daemon = True
-        t.start()
+        if self.no_log:
+            t = None
+        else:
+            t = threading.Thread(target=follow, args=(cmd.stdout, zuul_log_id))
+            t.daemon = True
+            t.start()
 
         ret = cmd.wait()
 
@@ -381,22 +384,25 @@
         # to catch up and exit.  If it hasn't done so by then, it is very
         # likely stuck in readline() because it spawned a child that is
         # holding stdout or stderr open.
-        t.join(10)
-        with Console(zuul_log_id) as console:
-            if t.isAlive():
-                console.addLine("[Zuul] standard output/error still open "
-                                "after child exited")
-            console.addLine("[Zuul] Task exit code: %s\n" % ret)
+        if t:
+            t.join(10)
+            with Console(zuul_log_id) as console:
+                if t.isAlive():
+                    console.addLine("[Zuul] standard output/error still open "
+                                    "after child exited")
+                console.addLine("[Zuul] Task exit code: %s\n" % ret)
 
-        # ZUUL: If the console log follow thread *is* stuck in readline,
-        # we can't close stdout (attempting to do so raises an
-        # exception) , so this is disabled.
-        # cmd.stdout.close()
-        # cmd.stderr.close()
+            # ZUUL: If the console log follow thread *is* stuck in readline,
+            # we can't close stdout (attempting to do so raises an
+            # exception) , so this is disabled.
+            # cmd.stdout.close()
+            # cmd.stderr.close()
 
-        # ZUUL: stdout and stderr are in the console log file
-        # ZUUL: return the saved log lines so we can ship them back
-        stdout = b('').join(_log_lines)
+            # ZUUL: stdout and stderr are in the console log file
+            # ZUUL: return the saved log lines so we can ship them back
+            stdout = b('').join(_log_lines)
+        else:
+            stdout = b('')
         stderr = b('')
 
         rc = cmd.returncode
diff --git a/zuul/configloader.py b/zuul/configloader.py
index 4f93907..be1bd63 100644
--- a/zuul/configloader.py
+++ b/zuul/configloader.py
@@ -700,10 +700,10 @@
                 (trusted, project) = tenant.getProject(project_name)
                 if project is None:
                     raise Exception("Unknown project %s" % (project_name,))
-                job_project = model.JobProject(project_name,
+                job_project = model.JobProject(project.canonical_name,
                                                project_override_branch,
                                                project_override_checkout)
-                new_projects[project_name] = job_project
+                new_projects[project.canonical_name] = job_project
             job.required_projects = new_projects
 
         tags = conf.get('tags')
diff --git a/zuul/driver/github/githubconnection.py b/zuul/driver/github/githubconnection.py
index b766c6f..02cbfdb 100644
--- a/zuul/driver/github/githubconnection.py
+++ b/zuul/driver/github/githubconnection.py
@@ -343,7 +343,7 @@
         if login:
             # TODO(tobiash): it might be better to plumb in the installation id
             project = body.get('repository', {}).get('full_name')
-            return self.connection.getUser(login, project=project)
+            return self.connection.getUser(login, project)
 
     def run(self):
         while True:
@@ -360,10 +360,11 @@
 class GithubUser(collections.Mapping):
     log = logging.getLogger('zuul.GithubUser')
 
-    def __init__(self, github, username):
-        self._github = github
+    def __init__(self, username, connection, project):
+        self._connection = connection
         self._username = username
         self._data = None
+        self._project = project
 
     def __getitem__(self, key):
         self._init_data()
@@ -379,9 +380,10 @@
 
     def _init_data(self):
         if self._data is None:
-            user = self._github.user(self._username)
+            github = self._connection.getGithubClient(self._project)
+            user = github.user(self._username)
             self.log.debug("Initialized data for user %s", self._username)
-            log_rate_limit(self.log, self._github)
+            log_rate_limit(self.log, github)
             self._data = {
                 'username': user.login,
                 'name': user.name,
@@ -722,10 +724,10 @@
             # installation -- change queues aren't likely to span more
             # than one installation.
             for project in projects:
-                installation_id = self.installation_map.get(project)
+                installation_id = self.installation_map.get(project.name)
                 if installation_id not in installation_ids:
                     installation_ids.add(installation_id)
-                    installation_projects.add(project)
+                    installation_projects.add(project.name)
         else:
             # We aren't in the context of a change queue and we just
             # need to query all installations.  This currently only
@@ -972,8 +974,8 @@
         log_rate_limit(self.log, github)
         return reviews
 
-    def getUser(self, login, project=None):
-        return GithubUser(self.getGithubClient(project), login)
+    def getUser(self, login, project):
+        return GithubUser(login, self, project)
 
     def getUserUri(self, login):
         return 'https://%s/%s' % (self.server, login)
diff --git a/zuul/executor/client.py b/zuul/executor/client.py
index b21a290..d561232 100644
--- a/zuul/executor/client.py
+++ b/zuul/executor/client.py
@@ -262,12 +262,6 @@
                 src_dir=os.path.join('src', p.canonical_name),
                 required=(p in required_projects),
             ))
-        # We are transitioning "projects" from a list to a dict
-        # indexed by canonical name, as it is much easier to access
-        # values in ansible.  Existing callers have been converted to
-        # "_projects" and "projects" is swapped; we will convert users
-        # back to "projects" and remove this soon.
-        zuul_params['_projects'] = zuul_params['projects']
 
         build = Build(job, uuid)
         build.parameters = params
diff --git a/zuul/executor/server.py b/zuul/executor/server.py
index a8ab8c4..d2c04ac 100644
--- a/zuul/executor/server.py
+++ b/zuul/executor/server.py
@@ -18,6 +18,7 @@
 import logging
 import multiprocessing
 import os
+import psutil
 import shutil
 import signal
 import shlex
@@ -713,7 +714,7 @@
                                            project['default_branch'])
             # Update the inventory variables to indicate the ref we
             # checked out
-            p = args['zuul']['_projects'][project['canonical_name']]
+            p = args['zuul']['projects'][project['canonical_name']]
             p['checkout'] = selected
         # Delete the origin remote from each repo we set up since
         # it will not be valid within the jobs.
@@ -1261,6 +1262,9 @@
             config.write('internal_poll_interval = 0.01\n')
 
             config.write('[ssh_connection]\n')
+            # NOTE(pabelanger): Try up to 3 times to run a task on a host.
+            # This helps to mitigate UNREACHABLE host errors with SSH.
+            config.write('retries = 3\n')
             # NB: when setting pipelining = True, keep_remote_files
             # must be False (the default).  Otherwise it apparently
             # will override the pipelining option and effectively
@@ -1949,6 +1953,7 @@
         ''' Apply some heuristics to decide whether or not we should
             be asking for more jobs '''
         load_avg = os.getloadavg()[0]
+        avail_mem_pct = 100.0 - psutil.virtual_memory().percent
         if self.accepting_work:
             # Don't unregister if we don't have any active jobs.
             if load_avg > self.max_load_avg and self.job_workers:
@@ -1956,10 +1961,19 @@
                     "Unregistering due to high system load {} > {}".format(
                         load_avg, self.max_load_avg))
                 self.unregister_work()
-        elif load_avg <= self.max_load_avg:
+            elif avail_mem_pct < self.min_avail_mem:
+                self.log.info(
+                    "Unregistering due to low memory {:3.1f}% < {}".format(
+                        avail_mem_pct, self.min_avail_mem))
+                self.unregister_work()
+        elif (load_avg <= self.max_load_avg and
+                avail_mem_pct >= self.min_avail_mem):
             self.log.info(
-                "Re-registering as load is within limits {} <= {}".format(
-                    load_avg, self.max_load_avg))
+                "Re-registering as job is within limits "
+                "{} <= {} {:3.1f}% <= {}".format(load_avg,
+                                                 self.max_load_avg,
+                                                 avail_mem_pct,
+                                                 self.min_avail_mem))
             self.register_work()
         if self.statsd:
             base_key = 'zuul.executor.%s' % self.hostname
diff --git a/zuul/merger/merger.py b/zuul/merger/merger.py
index bd4ca58..035dbf5 100644
--- a/zuul/merger/merger.py
+++ b/zuul/merger/merger.py
@@ -143,19 +143,30 @@
         self.update()
         repo = self.createRepoObject()
         origin = repo.remotes.origin
+        seen = set()
+        head = None
+        stale_refs = origin.stale_refs
+        # Update our local heads to match the remote, and pick one to
+        # reset the repo to.  We don't delete anything at this point
+        # because we want to make sure the repo is in a state stable
+        # enough for git to operate.
         for ref in origin.refs:
             if ref.remote_head == 'HEAD':
                 continue
+            if ref in stale_refs:
+                continue
             repo.create_head(ref.remote_head, ref, force=True)
-
-        # try reset to remote HEAD (usually origin/master)
-        # If it fails, pick the first reference
-        try:
-            repo.head.reference = origin.refs['HEAD']
-        except IndexError:
-            repo.head.reference = origin.refs[0]
+            seen.add(ref.remote_head)
+            if head is None:
+                head = ref.remote_head
+        self.log.debug("Reset to %s", head)
+        repo.head.reference = head
         reset_repo_to_head(repo)
         repo.git.clean('-x', '-f', '-d')
+        for ref in stale_refs:
+            self.log.debug("Delete stale ref %s", ref.remote_head)
+            repo.delete_head(ref.remote_head, force=True)
+            git.refs.RemoteReference.delete(repo, ref, force=True)
 
     def prune(self):
         repo = self.createRepoObject()
@@ -163,7 +174,7 @@
         stale_refs = origin.stale_refs
         if stale_refs:
             self.log.debug("Pruning stale refs: %s", stale_refs)
-            git.refs.RemoteReference.delete(repo, *stale_refs)
+            git.refs.RemoteReference.delete(repo, force=True, *stale_refs)
 
     def getBranchHead(self, branch):
         repo = self.createRepoObject()
@@ -193,11 +204,12 @@
         return repo.refs
 
     def setRef(self, path, hexsha, repo=None):
+        self.log.debug("Create reference %s at %s in %s",
+                       path, hexsha, self.local_path)
         if repo is None:
             repo = self.createRepoObject()
         binsha = gitdb.util.to_bin_sha(hexsha)
         obj = git.objects.Object.new_from_sha(repo, binsha)
-        self.log.debug("Create reference %s", path)
         git.refs.Reference.create(repo, path, obj, force=True)
 
     def setRefs(self, refs):
diff --git a/zuul/model.py b/zuul/model.py
index 96ec85b..9cfbd0a 100644
--- a/zuul/model.py
+++ b/zuul/model.py
@@ -1060,7 +1060,8 @@
                                         "from other projects."
                                         % (repr(self), this_origin))
                 if k not in set(['pre_run', 'run', 'post_run', 'roles',
-                                 'variables', 'required_projects']):
+                                 'variables', 'required_projects',
+                                 'allowed_projects']):
                     # TODO(jeblair): determine if deepcopy is required
                     setattr(self, k, copy.deepcopy(other._get(k)))
 
@@ -1097,6 +1098,12 @@
             self.updateVariables(other.variables)
         if other._get('required_projects') is not None:
             self.updateProjects(other.required_projects)
+        if (other._get('allowed_projects') is not None and
+            self._get('allowed_projects') is not None):
+            self.allowed_projects = self.allowed_projects.intersection(
+                other.allowed_projects)
+        elif other._get('allowed_projects') is not None:
+            self.allowed_projects = copy.deepcopy(other.allowed_projects)
 
         for k in self.context_attributes:
             if (other._get(k) is not None and
@@ -1108,8 +1115,18 @@
 
         self.inheritance_path = self.inheritance_path + (repr(other),)
 
-    def changeMatches(self, change):
-        if self.branch_matcher and not self.branch_matcher.matches(change):
+    def changeMatches(self, change, override_branch=None):
+        if override_branch is None:
+            branch_change = change
+        else:
+            # If an override branch is supplied, create a very basic
+            # change (a Ref) and set its branch to the override
+            # branch.
+            branch_change = Ref(change.project)
+            branch_change.ref = override_branch
+
+        if self.branch_matcher and not self.branch_matcher.matches(
+                branch_change):
             return False
 
         if self.file_matcher and not self.file_matcher.matches(change):
@@ -2071,9 +2088,6 @@
     def isUpdateOf(self, other):
         return False
 
-    def filterJobs(self, jobs):
-        return filter(lambda job: job.changeMatches(self), jobs)
-
     def getRelatedChanges(self):
         return set()
 
@@ -2668,21 +2682,33 @@
     def addProjectConfig(self, project_config):
         self.project_configs[project_config.name] = project_config
 
-    def collectJobs(self, item, jobname, change, path=None, jobs=None,
-                    stack=None):
-        if stack is None:
-            stack = []
-        if jobs is None:
-            jobs = []
-        if path is None:
-            path = []
-        path.append(jobname)
+    def _updateOverrideCheckouts(self, override_checkouts, job):
+        # Update the values in an override_checkouts dict with those
+        # in a job.  Used in collectJobVariants.
+        if job.override_checkout:
+            override_checkouts[None] = job.override_checkout
+        for req in job.required_projects.values():
+            if req.override_checkout:
+                override_checkouts[req.project_name] = req.override_checkout
+
+    def _collectJobVariants(self, item, jobname, change, path, jobs, stack,
+                            override_checkouts, indent):
         matched = False
-        indent = len(path) + 1
-        item.debug("Collecting job variants for {jobname}".format(
-            jobname=jobname), indent=indent)
+        local_override_checkouts = override_checkouts.copy()
+        override_branch = None
+        project = None
         for variant in self.getJobs(jobname):
-            if not variant.changeMatches(change):
+            if project is None and variant.source_context:
+                project = variant.source_context.project
+                if override_checkouts.get(None) is not None:
+                    override_branch = override_checkouts.get(None)
+                override_branch = override_checkouts.get(
+                    project.canonical_name, override_branch)
+                branches = self.tenant.getProjectBranches(project)
+                if override_branch not in branches:
+                    override_branch = None
+            if not variant.changeMatches(change,
+                                         override_branch=override_branch):
                 self.log.debug("Variant %s did not match %s", repr(variant),
                                change)
                 item.debug("Variant {variant} did not match".format(
@@ -2698,17 +2724,53 @@
                     parent = self.tenant.default_base_job
             else:
                 parent = None
+            self._updateOverrideCheckouts(local_override_checkouts, variant)
             if parent and parent not in path:
                 if parent in stack:
                     raise Exception("Dependency cycle in jobs: %s" % stack)
                 self.collectJobs(item, parent, change, path, jobs,
-                                 stack + [jobname])
+                                 stack + [jobname], local_override_checkouts)
             matched = True
-            jobs.append(variant)
+            if variant not in jobs:
+                jobs.append(variant)
+        return matched
+
+    def collectJobs(self, item, jobname, change, path=None, jobs=None,
+                    stack=None, override_checkouts=None):
+        # Stack is the recursion stack of job parent names.  Each time
+        # we go up a level, we add to stack, and it's popped as we
+        # descend.
+        if stack is None:
+            stack = []
+        # Jobs is the list of jobs we've accumulated.
+        if jobs is None:
+            jobs = []
+        # Path is the list of job names we've examined.  It
+        # accumulates and never reduces.  If more than one job has the
+        # same parent, this will prevent us from adding it a second
+        # time.
+        if path is None:
+            path = []
+        # Override_checkouts is a dictionary of canonical project
+        # names -> branch names.  It is not mutated, but instead new
+        # copies are made and updated as we ascend the hierarchy, so
+        # higher levels don't affect lower levels after we descend.
+        # It's used to override the branch matchers for jobs.
+        if override_checkouts is None:
+            override_checkouts = {}
+        path.append(jobname)
+        matched = False
+        indent = len(path) + 1
+        msg = "Collecting job variants for {jobname}".format(jobname=jobname)
+        self.log.debug(msg)
+        item.debug(msg, indent=indent)
+        matched = self._collectJobVariants(
+            item, jobname, change, path, jobs, stack, override_checkouts,
+            indent)
         if not matched:
             self.log.debug("No matching parents for job %s and change %s",
                            jobname, change)
-            item.debug("No matching parent for {jobname}".format(
+            item.debug("No matching parents for {jobname}".format(
                 jobname=repr(jobname)), indent=indent)
             raise NoMatchingParentError()
         return jobs
@@ -2723,8 +2785,17 @@
             self.log.debug("Collecting jobs %s for %s", jobname, change)
             item.debug("Freezing job {jobname}".format(
                 jobname=jobname), indent=1)
+            # Create the initial list of override_checkouts, which are
+            # used as we walk up the hierarchy to expand the set of
+            # jobs which match.
+            override_checkouts = {}
+            for variant in job_list.jobs[jobname]:
+                if variant.changeMatches(change):
+                    self._updateOverrideCheckouts(override_checkouts, variant)
             try:
-                variants = self.collectJobs(item, jobname, change)
+                variants = self.collectJobs(
+                    item, jobname, change,
+                    override_checkouts=override_checkouts)
             except NoMatchingParentError:
                 variants = None
             if not variants:
@@ -2764,7 +2835,7 @@
                 item.debug("No matching pipeline variants for {jobname}".
                            format(jobname=jobname), indent=2)
                 continue
-            if (frozen_job.allowed_projects and
+            if (frozen_job.allowed_projects is not None and
                 change.project.name not in frozen_job.allowed_projects):
                 raise Exception("Project %s is not allowed to run job %s" %
                                 (change.project.name, frozen_job.name))