Do late decoding of log stream buffer

The log stream is read in chunked blocks. When the stream contains
multi-byte unicode characters, such a character can be split across
two buffers, which breaks the decode step with an exception [1]. Fix
this by keeping the buffer binary and decoding only the completed
lines.
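
A minimal sketch of the idea (not the literal zuul_stream.py code;
handle() is a placeholder for the existing per-line processing):

    buf = b''
    while True:
        chunk = s.recv(4096)  # keep raw bytes, do not decode the chunk
        if not chunk:
            break
        buf += chunk
        while b'\n' in buf:
            line, buf = buf.split(b'\n', 1)
            handle(line.decode('utf-8'))  # only complete lines are decoded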

Further, we must expect that the stream also carries raw binary data.
To cope with this, harden the final decoding by adding
'backslashreplace'. This replaces every undecodable byte with an
escape sequence, so all information (even binary) is retained and the
decode of the stream can no longer fail.
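
For illustration (errors='backslashreplace' is supported by
bytes.decode() since Python 3.5, which matches the traceback below):

    >>> b'ok \xff\xfe'.decode('utf-8', errors='backslashreplace')
    'ok \\xff\\xfe'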

[1]: Log output
Ansible output: b'Exception in thread Thread-10:'
Ansible output: b'Traceback (most recent call last):'
Ansible output: b'  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner'
Ansible output: b'    self.run()'
Ansible output: b'  File "/usr/lib/python3.5/threading.py", line 862, in run'
Ansible output: b'    self._target(*self._args, **self._kwargs)'
Ansible output: b'  File "/var/lib/zuul/ansible/zuul/ansible/callback/zuul_stream.py", line 140, in _read_log'
Ansible output: b'    more = s.recv(4096).decode("utf-8")'
Ansible output: b"UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 4094-4095: unexpected end of data"
Ansible output: b''

Change-Id: I568ede2a2a4a64fd3a98480cebcbc2e86c54a2cf