Merge pull request #2229 from enkore/f/jsonp

JSON progress indicators
enkore 2017-03-03 14:39:28 +01:00 committed by GitHub
commit 6468b8a0c2
10 changed files with 442 additions and 48 deletions

View file

@@ -1,5 +1,5 @@
.. include:: ../global.rst.inc
.. highlight:: none
.. highlight:: json
.. _json_output:
@@ -22,7 +22,8 @@ where each line is a JSON object. The *type* key of the object determines its ot
Since JSON can only encode text, any string representing a file system path may miss non-text parts.
The following types are in use:
The following types are in use. Progress information is governed by the usual rules for progress
information; it is not produced unless ``--progress`` is specified.
archive_progress
Output during operations creating archives (:ref:`borg_create` and :ref:`borg_recreate`).
@@ -39,6 +40,39 @@ archive_progress
path
Current path
progress_message
Message-based progress information without concrete progress values; just a message
saying what is currently being worked on.
operation
unique, opaque integer ID of the operation
:ref:`msgid <msgid>`
Message ID of the operation (may be *none*)
finished
boolean indicating whether the operation has finished; only the last object for an *operation*
can have this property set to *true*.
message
current progress message (may be empty/absent)
progress_percent
Absolute progress information with defined end/total and current value.
operation
unique, opaque integer ID of the operation
:ref:`msgid <msgid>`
Message ID of the operation (may be *none*)
finished
boolean indicating whether the operation has finished; only the last object for an *operation*
can have this property set to *true*.
message
A formatted progress message; this will include the percentage and perhaps other information
current
Current value (always less than or equal to *total*)
info
Array describing the current item; may be *none*; contents depend on *msgid*
total
Total value
file_status
This is only output by :ref:`borg_create` and :ref:`borg_recreate` if ``--list`` is specified. The usual
rules for the file listing apply, including the ``--filter`` option.
@@ -54,11 +88,46 @@ log_message
created
Unix timestamp (float)
levelname
Upper-case log level name (also called severity). Defined levels are: DEBUG, INFO, WARNING, CRITICAL
Upper-case log level name (also called severity). Defined levels are: DEBUG, INFO, WARNING, ERROR, CRITICAL
name
Name of the emitting entity
message
Formatted log message
:ref:`msgid <msgid>`
Message ID, may be *none* or absent
.. rubric:: Examples (reformatted, each object would be on exactly one line)
:ref:`borg_extract` progress::
{"message": "100.0% Extracting: src/borgbackup.egg-info/entry_points.txt",
"current": 13000228, "total": 13004993, "info": ["src/borgbackup.egg-info/entry_points.txt"],
"operation": 1, "msgid": "extract", "type": "progress_percent", "finished": false}
{"message": "100.0% Extracting: src/borgbackup.egg-info/SOURCES.txt",
"current": 13004993, "total": 13004993, "info": ["src/borgbackup.egg-info/SOURCES.txt"],
"operation": 1, "msgid": "extract", "type": "progress_percent", "finished": false}
{"operation": 1, "msgid": "extract", "type": "progress_percent", "finished": true}
:ref:`borg_create` file listing with progress::
{"original_size": 0, "compressed_size": 0, "deduplicated_size": 0, "nfiles": 0, "type": "archive_progress", "path": "src"}
{"type": "file_status", "status": "U", "path": "src/borgbackup.egg-info/entry_points.txt"}
{"type": "file_status", "status": "U", "path": "src/borgbackup.egg-info/SOURCES.txt"}
{"type": "file_status", "status": "d", "path": "src/borgbackup.egg-info"}
{"type": "file_status", "status": "d", "path": "src"}
{"original_size": 13176040, "compressed_size": 11386863, "deduplicated_size": 503, "nfiles": 277, "type": "archive_progress", "path": ""}
Internal transaction progress::
{"message": "Saving files cache", "operation": 2, "msgid": "cache.commit", "type": "progress_message", "finished": false}
{"message": "Saving cache config", "operation": 2, "msgid": "cache.commit", "type": "progress_message", "finished": false}
{"message": "Saving chunks cache", "operation": 2, "msgid": "cache.commit", "type": "progress_message", "finished": false}
{"operation": 2, "msgid": "cache.commit", "type": "progress_message", "finished": true}
A debug log message::
{"message": "35 self tests completed in 0.08 seconds",
"type": "log_message", "created": 1488278449.5575905, "levelname": "DEBUG", "name": "borg.archiver"}
Standard output
---------------
@@ -105,6 +174,31 @@ stats
unique_csize
Compressed and encrypted size of all chunks
Example *borg info* output::
{
"cache": {
"path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"stats": {
"total_chunks": 511533,
"total_csize": 17948017540,
"total_size": 22635749792,
"total_unique_chunks": 54892,
"unique_csize": 1920405405,
"unique_size": 2449675468
}
},
"encryption": {
"mode": "repokey"
},
"repository": {
"id": "0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"last_modified": "Mon, 2017-02-27 21:21:58",
"location": "/home/user/testrepo"
},
"security_dir": "/home/user/.config/borg/security/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23"
}
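The cache *stats* block lends itself to simple post-processing. For instance (a sketch of our own, not a borg API), the overall deduplication ratio is ``unique_csize / total_csize``::

```python
import json

def dedup_ratio(info_output):
    """Unique compressed size as a fraction of total compressed size.

    Key names follow the ``borg info --json`` example above.
    """
    stats = json.loads(info_output)["cache"]["stats"]
    return stats["unique_csize"] / stats["total_csize"]

# With the numbers above: 1920405405 / 17948017540, roughly 0.107,
# i.e. about 89% of the compressed data is deduplicated away.
```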
.. rubric:: Archive formats
:ref:`borg_info` uses an extended format for archives, which is more expensive to retrieve, while
@@ -112,12 +206,11 @@ stats
array under the *archives* key, while :ref:`borg_create` returns a single archive object under the
*archive* key.
Both formats contain a *name* key with the archive name, and the *id* key with the hexadecimal archive ID.
Both formats contain a *name* key with the archive name, the *id* key with the hexadecimal archive ID,
and the *start* key with the start timestamp.
info and create further have:
*borg info* and *borg create* further have:
start
Start timestamp
end
End timestamp
duration
@@ -152,8 +245,219 @@ username
comment
Archive comment, if any
Example of a simple archive listing (``borg list --last 1 --json``)::
{
"archives": [
{
"id": "80cd07219ad725b3c5f665c1dcf119435c4dee1647a560ecac30f8d40221a46a",
"name": "host-system-backup-2017-02-27",
"start": "Mon, 2017-02-27 21:21:52"
}
],
"encryption": {
"mode": "repokey"
},
"repository": {
"id": "0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"last_modified": "Mon, 2017-02-27 21:21:58",
"location": "/home/user/repository"
}
}
The same archive with more information (``borg info --last 1 --json``)::
{
"archives": [
{
"command_line": [
"/home/user/.local/bin/borg",
"create",
"/home/user/repository",
"..."
],
"comment": "",
"duration": 5.641542,
"end": "Mon, 2017-02-27 21:21:58",
"hostname": "host",
"id": "80cd07219ad725b3c5f665c1dcf119435c4dee1647a560ecac30f8d40221a46a",
"limits": {
"max_archive_size": 0.0001330855110409714
},
"name": "host-system-backup-2017-02-27",
"start": "Mon, 2017-02-27 21:21:52",
"stats": {
"compressed_size": 1880961894,
"deduplicated_size": 2791,
"nfiles": 53669,
"original_size": 2400471280
},
"username": "user"
}
],
"cache": {
"path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"stats": {
"total_chunks": 511533,
"total_csize": 17948017540,
"total_size": 22635749792,
"total_unique_chunks": 54892,
"unique_csize": 1920405405,
"unique_size": 2449675468
}
},
"encryption": {
"mode": "repokey"
},
"repository": {
"id": "0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"last_modified": "Mon, 2017-02-27 21:21:58",
"location": "/home/user/repository"
}
}
.. rubric:: File listings
Listing the contents of an archive can produce *a lot* of JSON. Each item (file, directory, ...) is described
by one object in the *files* array of the :ref:`borg_list` output. Refer to the *borg list* documentation for
by one object in the *items* array of the :ref:`borg_list` output. Refer to the *borg list* documentation for
the available keys and their meaning.
Example (excerpt)::
{
"encryption": {
"mode": "repokey"
},
"repository": {
"id": "0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"last_modified": "Mon, 2017-02-27 21:21:58",
"location": "/home/user/repository"
},
"items": [
{
"type": "d",
"mode": "drwxr-xr-x",
"user": "user",
"group": "user",
"uid": 1000,
"gid": 1000,
"path": "linux",
"healthy": true,
"source": "",
"linktarget": "",
"flags": null,
"isomtime": "Sat, 2016-05-07 19:46:01",
"size": 0
}
]
}
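Such a listing can be post-processed with any JSON library. A sketch (our own helper, not part of borg) that pulls out the paths of non-directory items above a size threshold::

```python
import json

def large_item_paths(listing, min_size):
    """Paths of non-directory items of at least ``min_size`` bytes.

    Uses the *items*, *type*, *size* and *path* keys shown in the
    excerpt above; for very large archives a streaming parser would
    be preferable to ``json.loads``.
    """
    return [item["path"]
            for item in json.loads(listing)["items"]
            if item["type"] != "d" and item["size"] >= min_size]
```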
.. _msgid:
Message IDs
-----------
Message IDs are strings that essentially give a log message or operation a name, without actually using the
full text, since texts change more frequently. Message IDs are unambiguous and reduce the need to parse
log messages.
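For example, a caller might branch on the ID rather than on the message text. A sketch with an assumed retry policy: ``ConnectionClosed`` appears in the list below, and ``LockTimeout`` is the exception class borg's own error handling treats as unimportant::

```python
import json

# Hypothetical retry policy keyed on msgid instead of message text.
RETRYABLE = {"ConnectionClosed", "LockTimeout"}  # assumption, adjust to taste

def should_retry(line):
    msg = json.loads(line)
    return msg.get("type") == "log_message" and msg.get("msgid") in RETRYABLE
```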
Assigned message IDs are:
.. See scripts/errorlist.py; this is slightly edited.
Errors
Archive.AlreadyExists
Archive {} already exists
Archive.DoesNotExist
Archive {} does not exist
Archive.IncompatibleFilesystemEncodingError
Failed to encode filename "{}" into file system encoding "{}". Consider configuring the LANG environment variable.
Cache.CacheInitAbortedError
Cache initialization aborted
Cache.EncryptionMethodMismatch
Repository encryption method changed since last access, refusing to continue
Cache.RepositoryAccessAborted
Repository access aborted
Cache.RepositoryIDNotUnique
Cache is newer than repository - do you have multiple, independently updated repos with same ID?
Cache.RepositoryReplay
Cache is newer than repository - this is either an attack or unsafe (multiple repos with same ID)
Buffer.MemoryLimitExceeded
Requested buffer size {} is above the limit of {}.
ExtensionModuleError
The Borg binary extension modules do not seem to be properly installed
IntegrityError
Data integrity error: {}
NoManifestError
Repository has no manifest.
PlaceholderError
Formatting Error: "{}".format({}): {}({})
KeyfileInvalidError
Invalid key file for repository {} found in {}.
KeyfileMismatchError
Mismatch between repository {} and key file {}.
KeyfileNotFoundError
No key file for repository {} found in {}.
PassphraseWrong
passphrase supplied in BORG_PASSPHRASE is incorrect
PasswordRetriesExceeded
exceeded the maximum password retries
RepoKeyNotFoundError
No key entry found in the config of repository {}.
UnsupportedManifestError
Unsupported manifest envelope. A newer version is required to access this repository.
UnsupportedPayloadError
Unsupported payload type {}. A newer version is required to access this repository.
NotABorgKeyFile
This file is not a borg key backup, aborting.
RepoIdMismatch
This key backup seems to be for a different backup repository, aborting.
UnencryptedRepo
Keymanagement not available for unencrypted repositories.
UnknownKeyType
Keytype {0} is unknown.
LockError
Failed to acquire the lock {}.
LockErrorT
Failed to acquire the lock {}.
ConnectionClosed
Connection closed by remote host
InvalidRPCMethod
RPC method {} is not valid
PathNotAllowed
Repository path not allowed
RemoteRepository.RPCServerOutdated
Borg server is too old for {}. Required version {}
UnexpectedRPCDataFormatFromClient
Borg {}: Got unexpected RPC data format from client.
UnexpectedRPCDataFormatFromServer
Got unexpected RPC data format from server:
{}
Repository.AlreadyExists
Repository {} already exists.
Repository.CheckNeeded
Inconsistency detected. Please run "borg check {}".
Repository.DoesNotExist
Repository {} does not exist.
Repository.InsufficientFreeSpaceError
Insufficient free space to complete transaction (required: {}, available: {}).
Repository.InvalidRepository
{} is not a valid repository. Check repo config.
Repository.ObjectNotFound
Object with key {} not found in repository {}.
Operations
- cache.begin_transaction
- cache.commit
- cache.sync
*info* is one string element: the name of the archive currently being synced.
- repository.compact_segments
- repository.replay_segments
- repository.check
- check.verify_data
- extract
*info* is one string element: the name of the path currently being extracted.
- extract.permissions
- archive.delete

scripts/errorlist.py Executable file
View file

@@ -0,0 +1,14 @@
#!/usr/bin/env python3
from textwrap import indent
import borg.archiver
from borg.helpers import Error, ErrorWithTraceback
classes = Error.__subclasses__() + ErrorWithTraceback.__subclasses__()
for cls in sorted(classes, key=lambda cls: (cls.__module__, cls.__qualname__)):
    if cls is ErrorWithTraceback:
        continue
    print(' ', cls.__qualname__)
    print(indent(cls.__doc__, ' ' * 8))

View file

@@ -734,7 +734,7 @@ Utilization of max. archive size: {csize_max:.0%}
try:
unpacker = msgpack.Unpacker(use_list=False)
items_ids = self.metadata.items
pi = ProgressIndicatorPercent(total=len(items_ids), msg="Decrementing references %3.0f%%")
pi = ProgressIndicatorPercent(total=len(items_ids), msg="Decrementing references %3.0f%%", msgid='archive.delete')
for (i, (items_id, data)) in enumerate(zip(items_ids, self.repository.get_many(items_ids))):
if progress:
pi.show(i)
@@ -1153,7 +1153,8 @@ class ArchiveChecker:
chunks_count_segments = 0
errors = 0
defect_chunks = []
pi = ProgressIndicatorPercent(total=chunks_count_index, msg="Verifying data %6.2f%%", step=0.01)
pi = ProgressIndicatorPercent(total=chunks_count_index, msg="Verifying data %6.2f%%", step=0.01,
msgid='check.verify_data')
marker = None
while True:
chunk_ids = self.repository.scan(limit=100, marker=marker)

View file

@@ -623,7 +623,7 @@ class Archiver:
filter = self.build_filter(matcher, peek_and_store_hardlink_masters, strip_components)
if progress:
pi = ProgressIndicatorPercent(msg='%5.1f%% Extracting: %s', step=0.1)
pi = ProgressIndicatorPercent(msg='%5.1f%% Extracting: %s', step=0.1, msgid='extract')
pi.output('Calculating size')
extracted_size = sum(item.get_size(hardlink_masters) for item in archive.iter_items(filter))
pi.total = extracted_size
@@ -656,8 +656,12 @@
except BackupOSError as e:
self.print_warning('%s: %s', remove_surrogates(orig_path), e)
if pi:
pi.finish()
if not args.dry_run:
pi = ProgressIndicatorPercent(total=len(dirs), msg='Setting directory permissions %3.0f%%')
pi = ProgressIndicatorPercent(total=len(dirs), msg='Setting directory permissions %3.0f%%',
msgid='extract.permissions')
while dirs:
pi.show()
dir_item = dirs.pop(-1)
@@ -3426,7 +3430,7 @@ def main(): # pragma: no cover
signal_handler('SIGUSR2', sig_trace_handler), \
signal_handler('SIGINFO', sig_info_handler):
archiver = Archiver()
msg = tb = None
msg = msgid = tb = None
tb_log_level = logging.ERROR
try:
args = archiver.get_args(sys.argv, os.environ.get('SSH_ORIGINAL_COMMAND'))
@@ -3443,11 +3447,13 @@
exit_code = archiver.run(args)
except Error as e:
msg = e.get_message()
msgid = type(e).__qualname__
tb_log_level = logging.ERROR if e.traceback else logging.DEBUG
tb = "%s\n%s" % (traceback.format_exc(), sysinfo())
exit_code = e.exit_code
except RemoteRepository.RPCError as e:
important = e.exception_class not in ('LockTimeout', )
msgid = e.exception_class
tb_log_level = logging.ERROR if important else logging.DEBUG
if important:
msg = e.exception_full
@@ -3458,6 +3464,7 @@
exit_code = EXIT_ERROR
except Exception:
msg = 'Local Exception'
msgid = 'Exception'
tb_log_level = logging.ERROR
tb = '%s\n%s' % (traceback.format_exc(), sysinfo())
exit_code = EXIT_ERROR
@@ -3468,14 +3475,16 @@
exit_code = EXIT_ERROR
except SigTerm:
msg = 'Received SIGTERM'
msgid = 'Signal.SIGTERM'
tb_log_level = logging.DEBUG
tb = '%s\n%s' % (traceback.format_exc(), sysinfo())
exit_code = EXIT_ERROR
except SigHup:
msg = 'Received SIGHUP.'
msgid = 'Signal.SIGHUP'
exit_code = EXIT_ERROR
if msg:
logger.error(msg)
logger.error(msg, msgid=msgid)
if tb:
logger.log(tb_log_level, tb)
if args.show_rc:

View file

@@ -320,7 +320,7 @@ Chunk index: {0.total_unique_chunks:20d} {0.total_chunks:20d}"""
def begin_txn(self):
# Initialize transaction snapshot
pi = ProgressIndicatorMessage()
pi = ProgressIndicatorMessage(msgid='cache.begin_transaction')
txn_dir = os.path.join(self.path, 'txn.tmp')
os.mkdir(txn_dir)
pi.output('Initializing cache transaction: Reading config')
@@ -340,7 +340,7 @@ Chunk index: {0.total_unique_chunks:20d} {0.total_chunks:20d}"""
if not self.txn_active:
return
self.security_manager.save(self.manifest, self.key, self)
pi = ProgressIndicatorMessage()
pi = ProgressIndicatorMessage(msgid='cache.commit')
if self.files is not None:
if self._newest_mtime is None:
# was never set because no files were modified/added
@@ -468,7 +468,8 @@ Chunk index: {0.total_unique_chunks:20d} {0.total_chunks:20d}"""
chunk_idx = None
if self.progress:
pi = ProgressIndicatorPercent(total=len(archive_ids), step=0.1,
msg='%3.0f%% Syncing chunks cache. Processing archive %s')
msg='%3.0f%% Syncing chunks cache. Processing archive %s',
msgid='cache.sync')
for archive_id in archive_ids:
archive_name = lookup_name(archive_id)
if self.progress:

View file

@@ -1377,10 +1377,22 @@ def ellipsis_truncate(msg, space):
class ProgressIndicatorBase:
LOGGER = 'borg.output.progress'
JSON_TYPE = None
json = False
def __init__(self):
operation_id_counter = 0
@classmethod
def operation_id(cls):
"""Unique number, can be used by receiving applications to distinguish different operations."""
cls.operation_id_counter += 1
return cls.operation_id_counter
def __init__(self, msgid=None):
self.handler = None
self.logger = logging.getLogger(self.LOGGER)
self.id = self.operation_id()
self.msgid = msgid
# If there are no handlers, set one up explicitly because the
# terminator and propagation needs to be set. If there are,
@@ -1394,6 +1406,7 @@ class ProgressIndicatorBase:
try:
formatter = logger.formatter
terminator = '\n' if logger.json else '\r'
self.json = logger.json
except AttributeError:
terminator = '\r'
else:
@@ -1404,12 +1417,31 @@
if self.logger.level == logging.NOTSET:
self.logger.setLevel(logging.WARN)
self.logger.propagate = False
self.emit = self.logger.getEffectiveLevel() == logging.INFO
def __del__(self):
if self.handler is not None:
self.logger.removeHandler(self.handler)
self.handler.close()
def output_json(self, *, finished=False, **kwargs):
assert self.json
if not self.emit:
return
kwargs.update(dict(
operation=self.id,
msgid=self.msgid,
type=self.JSON_TYPE,
finished=finished
))
print(json.dumps(kwargs), file=sys.stderr)
def finish(self):
if self.json:
self.output_json(finished=True)
else:
self.output('')
def justify_to_terminal_size(message):
terminal_space = get_terminal_size(fallback=(-1, -1))[0]
@@ -1420,15 +1452,19 @@ def justify_to_terminal_size(message):
class ProgressIndicatorMessage(ProgressIndicatorBase):
def output(self, msg):
self.logger.info(justify_to_terminal_size(msg))
JSON_TYPE = 'progress_message'
def finish(self):
self.output('')
def output(self, msg):
if self.json:
self.output_json(message=msg)
else:
self.logger.info(justify_to_terminal_size(msg))
class ProgressIndicatorPercent(ProgressIndicatorBase):
def __init__(self, total=0, step=5, start=0, msg="%3.0f%%"):
JSON_TYPE = 'progress_percent'
def __init__(self, total=0, step=5, start=0, msg="%3.0f%%", msgid=None):
"""
Percentage-based progress indicator
@@ -1443,7 +1479,7 @@ class ProgressIndicatorPercent(ProgressIndicatorBase):
self.step = step
self.msg = msg
super().__init__()
super().__init__(msgid=msgid)
def progress(self, current=None, increase=1):
if current is not None:
@@ -1466,22 +1502,23 @@ class ProgressIndicatorPercent(ProgressIndicatorBase):
if pct is not None:
# truncate the last argument, if no space is available
if info is not None:
# no need to truncate if we're not outputting to a terminal
terminal_space = get_terminal_size(fallback=(-1, -1))[0]
if terminal_space != -1:
space = terminal_space - len(self.msg % tuple([pct] + info[:-1] + ['']))
info[-1] = ellipsis_truncate(info[-1], space)
return self.output(self.msg % tuple([pct] + info), justify=False)
if not self.json:
# no need to truncate if we're not outputting to a terminal
terminal_space = get_terminal_size(fallback=(-1, -1))[0]
if terminal_space != -1:
space = terminal_space - len(self.msg % tuple([pct] + info[:-1] + ['']))
info[-1] = ellipsis_truncate(info[-1], space)
return self.output(self.msg % tuple([pct] + info), justify=False, info=info)
return self.output(self.msg % pct)
def output(self, message, justify=True):
if justify:
message = justify_to_terminal_size(message)
self.logger.info(message)
def finish(self):
self.output('')
def output(self, message, justify=True, info=None):
if self.json:
self.output_json(message=message, current=self.counter, total=self.total, info=info)
else:
if justify:
message = justify_to_terminal_size(message)
self.logger.info(message)
class ProgressIndicatorEndless:
@@ -1580,11 +1617,14 @@ class ArchiveFormatter(BaseFormatter):
def get_item_data(self, archive):
return {
# *name* is the key used by borg-info for the archive name, this makes the formats more compatible
'name': remove_surrogates(archive.name),
'barchive': archive.name,
'archive': remove_surrogates(archive.name),
'id': bin_to_hex(archive.id),
'time': format_time(to_localtime(archive.ts)),
# *start* is the key used by borg-info for this timestamp, this makes the formats more compatible
'start': format_time(to_localtime(archive.ts)),
}
@staticmethod
@@ -1697,7 +1737,7 @@ class ItemFormatter(BaseFormatter):
begin = json_dump(basic_json_data(self.archive.manifest))
begin, _, _ = begin.rpartition('\n}') # remove last closing brace, we want to extend the object
begin += ',\n'
begin += ' "files": [\n'
begin += ' "items": [\n'
return begin
def end(self):

View file

@@ -92,7 +92,7 @@ def setup_logging(stream=None, conf_fname=None, env_var='BORG_LOGGING_CONF', lev
fmt = '$LOG %(levelname)s %(name)s Remote: %(message)s'
else:
fmt = '%(message)s'
formatter = JsonFormatter(fmt) if json else logging.Formatter(fmt)
formatter = JsonFormatter(fmt) if json and not is_serve else logging.Formatter(fmt)
handler.setFormatter(formatter)
borg_logger = logging.getLogger('borg')
borg_logger.formatter = formatter
@@ -165,24 +165,38 @@ def create_logger(name=None):
return self.__logger.setLevel(*args, **kw)
def log(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.log(*args, **kw)
def exception(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.exception(*args, **kw)
def debug(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.debug(*args, **kw)
def info(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.info(*args, **kw)
def warning(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.warning(*args, **kw)
def error(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.error(*args, **kw)
def critical(self, *args, **kw):
if 'msgid' in kw:
kw.setdefault('extra', {})['msgid'] = kw.pop('msgid')
return self.__logger.critical(*args, **kw)
return LazyLogger(name)
@@ -194,6 +208,8 @@ class JsonFormatter(logging.Formatter):
'levelname',
'name',
'message',
# msgid is an attribute we made up in Borg to expose a non-changing handle for log messages
'msgid',
)
# Other attributes that are not very useful but do exist:

View file

@@ -636,6 +636,12 @@ This problem will go away as soon as the server has been upgraded to 1.0.7+.
opts.append('--critical')
else:
raise ValueError('log level missing, fix this code')
try:
borg_logger = logging.getLogger('borg')
if borg_logger.json:
opts.append('--log-json')
except AttributeError:
pass
env_vars = []
if yes(env_var_override='BORG_HOSTNAME_IS_UNIQUE', env_msg=None, prompt=False):
env_vars.append('BORG_HOSTNAME_IS_UNIQUE=yes')

View file

@@ -483,7 +483,8 @@ class Repository:
unused = []
logger.debug('compaction started.')
pi = ProgressIndicatorPercent(total=len(self.compact), msg='Compacting segments %3.0f%%', step=1)
pi = ProgressIndicatorPercent(total=len(self.compact), msg='Compacting segments %3.0f%%', step=1,
msgid='repository.compact_segments')
for segment, freeable_space in sorted(self.compact.items()):
if not self.io.segment_exists(segment):
logger.warning('segment %d not found, but listed in compaction data', segment)
@@ -584,7 +585,8 @@ class Repository:
self.prepare_txn(index_transaction_id, do_cleanup=False)
try:
segment_count = sum(1 for _ in self.io.segment_iterator())
pi = ProgressIndicatorPercent(total=segment_count, msg="Replaying segments %3.0f%%")
pi = ProgressIndicatorPercent(total=segment_count, msg='Replaying segments %3.0f%%',
msgid='repository.replay_segments')
for i, (segment, filename) in enumerate(self.io.segment_iterator()):
pi.show(i)
if index_transaction_id is not None and segment <= index_transaction_id:
@@ -694,7 +696,8 @@ class Repository:
self.prepare_txn(None) # self.index, self.compact, self.segments all empty now!
segment_count = sum(1 for _ in self.io.segment_iterator())
logger.debug('Found %d segments', segment_count)
pi = ProgressIndicatorPercent(total=segment_count, msg="Checking segments %3.1f%%", step=0.1)
pi = ProgressIndicatorPercent(total=segment_count, msg='Checking segments %3.1f%%', step=0.1,
msgid='repository.check')
for i, (segment, filename) in enumerate(self.io.segment_iterator()):
pi.show(i)
if segment > transaction_id:

View file

@@ -1552,17 +1552,17 @@ class ArchiverTestCase(ArchiverTestCaseBase):
list_archive = json.loads(self.cmd('list', '--json', self.repository_location + '::test'))
assert list_repo['repository'] == list_archive['repository']
files = list_archive['files']
assert len(files) == 2
file1 = files[1]
items = list_archive['items']
assert len(items) == 2
file1 = items[1]
assert file1['path'] == 'input/file1'
assert file1['size'] == 81920
list_archive = json.loads(self.cmd('list', '--json', '--format={sha256}', self.repository_location + '::test'))
assert list_repo['repository'] == list_archive['repository']
files = list_archive['files']
assert len(files) == 2
file1 = files[1]
items = list_archive['items']
assert len(items) == 2
file1 = items[1]
assert file1['path'] == 'input/file1'
assert file1['sha256'] == 'b2915eb69f260d8d3c25249195f2c8f4f716ea82ec760ae929732c0262442b2b'