Monitor job root and kill over-limit jobs

If a job is pointed at an abnormally large git repository (or a
maliciously large one), a clone can fill the executor's disk, as can
anything else a job does that writes data onto it.

We run a single thread that periodically runs du on the root of every
job on this executor. This is called the DiskAccountant.
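The loop described above might be sketched as follows (the class name
DiskAccountant matches this change, but the constructor arguments, the
du invocation, and the callback shape here are illustrative, not the
exact implementation):

```python
import subprocess
import threading


def du_mb(path):
    # Shell out to du for the total usage of a job root, in MiB.
    out = subprocess.check_output(['du', '-s', '-m', path])
    return int(out.split()[0])


class DiskAccountant:
    """Single thread that periodically checks every job root and calls
    a handler for any job over its limit (a sketch, not Zuul's code)."""

    def __init__(self, job_dirs, limit_mb, over_limit_cb, interval=60):
        self.job_dirs = job_dirs    # per-job root directories
        self.limit_mb = limit_mb    # per-executor config: MiB per job
        self.over_limit_cb = over_limit_cb  # e.g. aborts the job
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            for d in list(self.job_dirs):
                if du_mb(d) > self.limit_mb:
                    self.over_limit_cb(d)
            self._stop.wait(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```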

We add a per-executor config item setting the limit per job. This won't
save a server from a full disk if many thousands of concurrent changes
are submitted, but it will prevent accidental filling of the disk and
make malicious disk filling much harder.
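Such a per-executor setting might look like this in zuul.conf (the key
name and section shown here are illustrative, not necessarily the exact
option introduced by this change):

```ini
[executor]
# Per-job disk limit in MiB; jobs whose root grows past this
# are aborted by the DiskAccountant.
disk_limit_per_job = 250
```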

We also ignore hard links from the merge root, which exempts bits
cloned from the merge root from disk accounting.
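One way to implement that exemption (a sketch using os.lstat; the
function name and the walk-based approach are assumptions, not
necessarily how this change does it) is to skip any file whose link
count is greater than one, treating it as shared with the merge root:

```python
import os


def usage_excluding_hardlinks(root):
    """Sum file sizes under root, skipping files with more than one
    hard link (assumed to be cloned from the merge root), so only
    data the job itself wrote counts against its limit."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # file vanished mid-walk
            if st.st_nlink > 1:
                continue  # hard-linked from the merge root: exempt
            total += st.st_size
    return total
```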

Change-Id: I415e5930cc3ebe2c7e1a84316e78578d6b9ecf30
Story: 2000879
Task: 3504
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
index 7dcb4ae..3d8d37c 100644
--- a/tests/unit/test_v3.py
+++ b/tests/unit/test_v3.py
@@ -924,3 +924,15 @@
         self.assertIn('- data-return-relative '
                       'http://example.com/test/log/url/docs/index.html',
                       A.messages[-1])
+
+
+class TestDiskAccounting(AnsibleZuulTestCase):
+    config_file = 'zuul-disk-accounting.conf'
+    tenant_config_file = 'config/disk-accountant/main.yaml'
+
+    def test_disk_accountant_kills_job(self):
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertHistory([
+            dict(name='dd-big-empty-file', result='ABORTED', changes='1,1')])