Remove nodesets from builds canceled during reconfiguration

We observed errant behavior in production in the configuration covered
by test_reconfigure_window_shrink: a reconfiguration shrank the active
window to less than its current value after jobs had already completed.

Correct the underlying issue by removing the nodeset associated with
a build from the buildset when the reconfiguration routine cancels it.
Then, if we later launch the same job for some reason, we will obtain
a new nodeset.
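
To illustrate the intent, here is a minimal stand-in sketch (these are
not Zuul's real classes, only the removeJobNodeSet/getJobNodeSet names
come from the change itself): once the nodeset entry is removed from
the buildset, a relaunched job finds nothing cached and must request a
fresh nodeset.

```python
# Stand-in for the buildset bookkeeping; not Zuul's actual implementation.
class BuildSet:
    def __init__(self):
        self.nodesets = {}  # job name -> nodeset

    def getJobNodeSet(self, job_name):
        # Returns None when no nodeset is recorded for the job.
        return self.nodesets.get(job_name)

    def setJobNodeSet(self, job_name, nodeset):
        self.nodesets[job_name] = nodeset

    def removeJobNodeSet(self, job_name):
        # Forget the nodeset so a relaunched job requests a new one.
        del self.nodesets[job_name]

build_set = BuildSet()
build_set.setJobNodeSet("py35", "nodeset-1")
build_set.removeJobNodeSet("py35")      # done when reconfiguration cancels
assert build_set.getJobNodeSet("py35") is None  # relaunch starts clean
```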

If the build is running at the time it's canceled, we will still need
the scheduler to return the nodeset to nodepool.  Since the scheduler
currently relies on the value in the buildset to find the nodeset,
attach the nodeset to the build directly, so that even after we have
removed the nodeset from the buildset, the scheduler still has a
pointer to the nodeset when the build completes.
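
A minimal sketch of that pointer arrangement (class names assumed for
illustration, not Zuul's real objects): the build carries its own
reference to the nodeset, so the scheduler can return the nodes even
after the reconfiguration cleanup has cleared the buildset entry.

```python
# Stand-in sketch; not Zuul's actual classes.
class Build:
    def __init__(self, job_name, nodeset, build_set):
        self.job_name = job_name
        self.nodeset = nodeset          # direct pointer on the build
        self.build_set = build_set

nodesets = {"py35": "nodeset-1"}        # stand-in for the buildset mapping
build = Build("py35", nodesets["py35"], nodesets)

# Reconfiguration cancels the build and clears the buildset entry:
del nodesets["py35"]

# The scheduler no longer finds the nodeset via the buildset...
assert nodesets.get("py35") is None
# ...but can still return it to nodepool via the build itself:
assert build.nodeset == "nodeset-1"
```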

Having said all that, we really don't want to waste resources by
shrinking the window on reconfiguration.  A future change is likely to
correct that behavior and will very likely invalidate the test just
added.  The only other time a build is likely to be canceled during
reconfiguration yet used again later is when a job is removed from a
project and then added back while changes are in the queue.  So that we
continue to have a test covering this case, add a second test based on
that scenario.

Both of these tests fail without the included fix.

Change-Id: If61b34e0f1464cb69d9d0b9053e05f1af996a67b
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index ed7d64b..846242c 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -650,6 +650,15 @@
                     self.log.exception(
                         "Exception while canceling build %s "
                         "for change %s" % (build, build.build_set.item.change))
+                # In the unlikely case that a build is removed and
+                # later added back, make sure we clear out the nodeset
+                # so it gets requested again.
+                try:
+                    build.build_set.removeJobNodeSet(build.job.name)
+                except Exception:
+                    self.log.exception(
+                        "Exception while removing nodeset from build %s "
+                        "for change %s" % (build, build.build_set.item.change))
                 finally:
                     tenant.semaphore_handler.release(
                         build.build_set.item, build.job)
@@ -920,7 +929,7 @@
         # to pass this on to the pipeline manager, make sure we return
         # the nodes to nodepool.
         try:
-            nodeset = build.build_set.getJobNodeSet(build.job.name)
+            nodeset = build.nodeset
             autohold_key = (build.pipeline.layout.tenant.name,
                             build.build_set.item.change.project.canonical_name,
                             build.job.name)