Mirror of https://github.com/borgbackup/borg.git, synced 2025-12-18 15:46:20 -05:00

commit 3120f9cd1c (parent ff7f2ab2ab)

    fixed typos and grammar (AI)

    This was done by Junie AI.

218 changed files with 1552 additions and 1551 deletions
AUTHORS (4 changes)

@@ -1,5 +1,5 @@
-E-mail addresses listed here are not intended for support, please see
-the `support section`_ instead.
+Email addresses listed here are not intended for support.
+Please see the `support section`_ instead.
 
 .. _support section: https://borgbackup.readthedocs.io/en/stable/support.html
Brewfile (4 changes)

@@ -5,8 +5,8 @@ brew 'xxhash'
 brew 'openssl@3.0'
 
 # osxfuse (aka macFUSE) is only required for "borg mount",
-# but won't work on github actions' workers.
-# it requires installing a kernel extension, so some users
+# but won't work on GitHub Actions' workers.
+# It requires installing a kernel extension, so some users
 # may want it and some won't.
 
 #cask 'osxfuse'
@@ -1,6 +1,6 @@
-# stuff we need to include into the sdist is handled automatically by
+# The files we need to include in the sdist are handled automatically by
 # setuptools_scm - it includes all git-committed files.
-# but we want to exclude some committed files/dirs not needed in the sdist:
+# But we want to exclude some committed files/directories not needed in the sdist:
 exclude .editorconfig .gitattributes .gitignore .mailmap Vagrantfile
 prune .github
 include src/borg/platform/darwin.c src/borg/platform/freebsd.c src/borg/platform/linux.c src/borg/platform/posix.c
@@ -2,7 +2,7 @@
 
 ## Supported Versions
 
-These borg releases are currently supported with security updates.
+These Borg releases are currently supported with security updates.
 
 | Version | Supported |
 |---------|--------------------|

@@ -14,6 +14,6 @@ These borg releases are currently supported with security updates.
 
 ## Reporting a Vulnerability
 
-See there:
+See here:
 
 https://borgbackup.readthedocs.io/en/latest/support.html#security-contact
Vagrantfile (vendored, 2 changes)

@@ -1,7 +1,7 @@
 # -*- mode: ruby -*-
 # vi: set ft=ruby :
 
-# Automated creation of testing environments / binaries on misc. platforms
+# Automated creation of testing environments/binaries on miscellaneous platforms
 
 $cpus = Integer(ENV.fetch('VMCPUS', '8')) # create VMs with that many cpus
 $xdistn = Integer(ENV.fetch('XDISTN', '8')) # dispatch tests to that many pytest workers
docs/3rd_party/README (vendored, 6 changes)

@@ -1,5 +1,5 @@
-Here we store 3rd party documentation, licenses, etc.
+Here we store third-party documentation, licenses, etc.
 
-Please note that all files inside the "borg" package directory (except the
-stuff excluded in setup.py) will be INSTALLED, so don't keep docs or licenses
+Please note that all files inside the "borg" package directory (except those
+excluded in setup.py) will be installed, so do not keep docs or licenses
 there.
@@ -21,7 +21,7 @@ help:
 @echo " singlehtml to make a single large HTML file"
 @echo " pickle to make pickle files"
 @echo " json to make JSON files"
-@echo " htmlhelp to make HTML files and a HTML help project"
+@echo " htmlhelp to make HTML files and an HTML help project"
 @echo " qthelp to make HTML files and a qthelp project"
 @echo " devhelp to make HTML files and a Devhelp project"
 @echo " epub to make an epub"
docs/_templates/globaltoc.html (vendored, 2 changes)

@@ -1,6 +1,6 @@
 <div class="sidebar-block">
 <div class="sidebar-toc">
-{# Restrict the sidebar toc depth to two levels while generating command usage pages.
+{# Restrict the sidebar ToC depth to two levels while generating command usage pages.
 This avoids superfluous entries for each "Description" and "Examples" heading. #}
 {% if pagename.startswith("usage/") and pagename not in (
 "usage/general", "usage/help", "usage/debug", "usage/notes",
docs/_templates/layout.html (vendored, 12 changes)

@@ -1,6 +1,6 @@
 {%- extends "basic/layout.html" %}
 
-{# Do this so that bootstrap is included before the main css file #}
+{# Do this so that Bootstrap is included before the main CSS file. #}
 {%- block htmltitle %}
 {% set script_files = script_files + ["_static/myscript.js"] %}
 <!-- Licensed under the Apache 2.0 License -->

@@ -20,7 +20,7 @@
 {{ super() }}
 {% endblock %}
 
-{# Displays the URL for the homepage if it's set or the master_doc if it is not #}
+{# Displays the URL for the homepage if it's set, or the master_doc if it is not. #}
 {% macro homepage() -%}
 {%- if theme_homepage %}
 {%- if hasdoc(theme_homepage) %}

@@ -33,7 +33,7 @@
 {%- endif %}
 {%- endmacro %}
 
-{# Displays the URL for the tospage if it's set or falls back to homepage macro #}
+{# Displays the URL for the tospage if it's set, or falls back to the homepage macro. #}
 {% macro tospage() -%}
 {%- if theme_tospage %}
 {%- if hasdoc(theme_tospage) %}

@@ -46,7 +46,7 @@
 {%- endif %}
 {%- endmacro %}
 
-{# Displays the URL for the projectpage if it's set or falls back to homepage macro #}
+{# Displays the URL for the projectpage if it's set, or falls back to the homepage macro. #}
 {% macro projectlink() -%}
 {%- if theme_projectlink %}
 {%- if hasdoc(theme_projectlink) %}

@@ -59,7 +59,7 @@
 {%- endif %}
 {%- endmacro %}
 
-{# Displays the next and previous links both before and after content #}
+{# Displays the next and previous links both before and after the content. #}
 {% macro render_relations() -%}
 {% if prev or next %}
 <div class="footer-relations">

@@ -82,7 +82,7 @@
 <div id="left-column">
 <div class="sphinxsidebar">
 {%- if sidebars != None %}
-{#- new style sidebar: explicitly include/exclude templates #}
+{#- New-style sidebar: explicitly include/exclude templates. #}
 {%- for sidebartemplate in sidebars %}
 {%- include sidebartemplate %}
 {%- endfor %}
@@ -5,8 +5,8 @@
 Borg documentation
 ==================
 
-.. when you add an element here, do not forget to add it to index.rst
-.. Note: Some things are in appendices (see latex_appendices in conf.py)
+.. When you add an element here, do not forget to add it to index.rst.
+.. Note: Some things are in appendices (see latex_appendices in conf.py).
 
 .. toctree::
 :maxdepth: 2
@@ -17,7 +17,7 @@ borg 1.2.x/1.4.x to borg 2.0
 
 Compatibility notes:
 
-- this is a major "breaking" release that is not compatible with existing repos.
+- This is a major "breaking" release that is not compatible with existing repositories.
 
 We tried to put all the necessary "breaking" changes into this release, so we
 hopefully do not need another breaking release in the near future. The changes

@@ -33,24 +33,24 @@ Compatibility notes:
 you must have followed the upgrade instructions at top of the change log
 relating to manifest and archive TAMs (borg2 just requires these TAMs now).
 
-- command line syntax was changed, scripts and wrappers will need changes:
+- Command-line syntax was changed; scripts and wrappers will need changes:
 
-- you will usually either export BORG_REPO=<MYREPO> into your environment or
+- You will usually either export BORG_REPO=<MYREPO> into your environment or
 call borg like: "borg -r <MYREPO> <COMMAND>".
-in the docs, we usually omit "-r ..." for brevity.
-- the scp-style REPO syntax was removed, please use ssh://..., #6697
-- ssh:// URLs: removed support for /~otheruser/, /~/ and /./, #6855.
+In the docs, we usually omit "-r ..." for brevity.
+- The scp-style REPO syntax was removed; please use ssh://..., #6697
+- ssh:// URLs: Removed support for /~otheruser/, /~/ and /./, #6855.
 New format:
 
 - ssh://user@host:port/relative/path
 - ssh://user@host:port//absolute/path
-- -P / --prefix option was removed, please use the similar -a / --match-archives.
-- archive names don't need to be unique anymore. to the contrary:
-it is now strongly recommended to use the identical name for borg create
+- -P / --prefix option was removed; please use the similar -a / --match-archives.
+- Archive names don't need to be unique anymore. To the contrary:
+It is now strongly recommended to use the identical name for borg create
 within the same series of archives to make borg work more efficiently.
-the name now identifies a series of archive, to identify a single archive
-please use aid:<archive-hash-prefix>, e.g.: borg delete aid:d34db33f
-- in case you do NOT want to adopt the "series name" way of naming archives
+The name now identifies a series of archives; to identify a single archive,
+please use aid:<archive-hash-prefix>, e.g., borg delete aid:d34db33f
+- In case you do NOT want to adopt the "series name" way of naming archives
 (like "myarchive") as we recommend, but keep using always-changing names
 (like "myserver-myarchive-20241231"), you can do that, but then you must
 make use of BORG_FILES_CACHE_SUFFIX and either set it to a constant suffix

@@ -60,9 +60,9 @@ Compatibility notes:
 greater than the count of different archives series you write to that repo.
 Usually borg uses a different files cache suffix per archive (series) name
 and defaults to BORG_FILES_CACHE_TTL=2 because that is sufficient for that.
-- the archive id is always given separately from the repository
-(differently than with borg 1.x you must not give repo::archive).
-- the series name or archive id is either given as a positional parameter,
+- The archive ID is always given separately from the repository.
+Unlike in borg 1.x, you must not give repo::archive.
+- The series name or archive ID is either given as a positional parameter,
 like:
 
 - borg create documents ~/Documents
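As a side illustration of the command-line conventions discussed in the borg2 compatibility notes (the repository URL and archive id here are hypothetical placeholders, not values taken from this commit), a session following the new syntax could look like:

```
# borg2-style invocation sketch -- all values are placeholders.
export BORG_REPO=ssh://user@host:port//absolute/path   # or pass: borg -r <MYREPO> <COMMAND>
borg create documents ~/Documents    # "documents" names an archive *series*
borg delete aid:d34db33f             # address a single archive by its id prefix
```

This only restates the syntax rules quoted in the notes (no scp-style repos, series names for borg create, aid: prefixes for single archives); it is not part of the diff itself.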
@@ -11,9 +11,9 @@ Compatibility notes:
 - The new default logging level is WARNING. Previously, it was INFO, which was
 more verbose. Use -v (or --info) to show once again log level INFO messages.
 See the "general" section in the usage docs.
-- for borg create, you need --list (additionally to -v) to see the long file
+- For borg create, you need --list (in addition to -v) to see the long file
 list (was needed so you can have e.g. --stats alone without the long list)
-- see below about BORG_DELETE_I_KNOW_WHAT_I_AM_DOING (was:
+- See below about BORG_DELETE_I_KNOW_WHAT_I_AM_DOING (was:
 BORG_CHECK_I_KNOW_WHAT_I_AM_DOING)
 
 Bug fixes:

@@ -48,7 +48,7 @@ New features:
 - update progress indication more often (e.g. for borg create within big
 files or for borg check repo), #500
 - finer chunker granularity for items metadata stream, #547, #487
-- borg create --list now used (additionally to -v) to enable the verbose
+- borg create --list is now used (in addition to -v) to enable the verbose
 file list output
 - display borg version below tracebacks, #532
@@ -86,17 +86,17 @@ Version 0.29.0 (2015-12-13)
 
 Compatibility notes:
 
-- when upgrading to 0.29.0 you need to upgrade client as well as server
-installations due to the locking and commandline interface changes otherwise
-you'll get an error msg about a RPC protocol mismatch or a wrong commandline
+- When upgrading to 0.29.0, you need to upgrade client as well as server
+installations due to the locking and command-line interface changes; otherwise
+you'll get an error message about an RPC protocol mismatch or a wrong command-line
 option.
-if you run a server that needs to support both old and new clients, it is
+If you run a server that needs to support both old and new clients, it is
 suggested that you have a "borg-0.28.2" and a "borg-0.29.0" command.
 clients then can choose via e.g. "borg --remote-path=borg-0.29.0 ...".
-- the default waiting time for a lock changed from infinity to 1 second for a
-better interactive user experience. if the repo you want to access is
+- The default waiting time for a lock changed from infinity to 1 second for a
+better interactive user experience. If the repo you want to access is
 currently locked, borg will now terminate after 1s with an error message.
-if you have scripts that shall wait for the lock for a longer time, use
+If you have scripts that should wait for the lock for a longer time, use
 --lock-wait N (with N being the maximum wait time in seconds).
 
 Bug fixes:
@@ -73,7 +73,7 @@ no matter what encryption mode they use, including "none"):
 for normal production operations - it is only needed once to get the archives in a
 repository into a good state. All archives have a valid TAM now.
 
-Vulnerability time line:
+Vulnerability timeline:
 
 * 2023-06-13: Vulnerability discovered during code review by Thomas Waldmann
 * 2023-06-13...: Work on fixing the issue, upgrade procedure, docs.
docs/conf.py (10 changes)

@@ -1,7 +1,7 @@
-# documentation build configuration file, created by
+# Documentation build configuration file, created by
 # sphinx-quickstart on Sat Sep 10 18:18:25 2011.
 #
-# This file is execfile()d with the current directory set to its containing dir.
+# This file is execfile()d with the current directory set to its containing directory.
 #
 # Note that not all possible configuration values are present in this
 # autogenerated file.

@@ -74,7 +74,7 @@ exclude_patterns = ["_build"]
 # default_role = None
 
 # The Borg docs contain no or very little Python docs.
-# Thus, the primary domain is rst.
+# Thus, the primary domain is RST.
 primary_domain = "rst"
 
 # If true, '()' will be appended to :func: etc. cross-reference text.

@@ -97,8 +97,8 @@ pygments_style = "sphinx"
 
 # -- Options for HTML output ---------------------------------------------------
 
 # The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
+# a list of built-in themes.
 import guzzle_sphinx_theme
 
 html_theme_path = guzzle_sphinx_theme.html_theme_path()
@@ -14,13 +14,13 @@ systemd and udev.
 Overview
 --------
 
-An udev rule is created to trigger on the addition of block devices. The rule contains a tag
-that triggers systemd to start a oneshot service. The oneshot service executes a script in
+A udev rule is created to trigger on the addition of block devices. The rule contains a tag
+that triggers systemd to start a one-shot service. The one-shot service executes a script in
 the standard systemd service environment, which automatically captures stdout/stderr and
 logs it to the journal.
 
-The script mounts the added block device, if it is a registered backup drive, and creates
-backups on it. When done, it optionally unmounts the file system and spins the drive down,
+The script mounts the added block device if it is a registered backup drive and creates
+backups on it. When done, it optionally unmounts the filesystem and spins the drive down,
 so that it may be physically disconnected.
 
 Configuring the system

@@ -29,7 +29,8 @@ Configuring the system
 First, create the ``/etc/backups`` directory (as root).
 All configuration goes into this directory.
 
-Find out the ID of the partition table of your backup disk (here assumed to be /dev/sdz):
+Find out the ID of the partition table of your backup disk (here assumed to be /dev/sdz)::
 
 lsblk --fs -o +PTUUID /dev/sdz
 
 Then, create ``/etc/backups/80-backup.rules`` with the following content (all on one line)::

@@ -46,8 +47,8 @@ launch the "automatic-backup" service, which we will create next, as the
 Type=oneshot
 ExecStart=/etc/backups/run.sh
 
-Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template,
-modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
+Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template;
+modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
 
 .. code-block:: bash

@@ -93,8 +94,8 @@ modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 echo "Disk $uuid is a backup disk"
 partition_path=/dev/disk/by-uuid/$uuid
-# Mount file system if not already done. This assumes that if something is already
-# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
+# Mount filesystem if not already done. This assumes that if something is already
+# mounted at $MOUNTPOINT, it is the backup drive. It will not find the drive if
 # it was mounted somewhere else.
 findmnt $MOUNTPOINT >/dev/null || mount $partition_path $MOUNTPOINT
 drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)

@@ -110,8 +111,8 @@ modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 # Set BORG_PASSPHRASE or BORG_PASSCOMMAND somewhere around here, using export,
 # if encryption is used.
 
-# No one can answer if Borg asks these questions, it is better to just fail quickly
-# instead of hanging.
+# Because no one can answer these questions non-interactively, it is better to
+# fail quickly instead of hanging.
 export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
 export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no

@@ -127,8 +128,8 @@ modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 $TARGET::$DATE-$$-system \
 / /boot
 
-# /home is often a separate partition / file system.
-# Even if it isn't (add --exclude /home above), it probably makes sense
+# /home is often a separate partition/filesystem.
+# Even if it is not (add --exclude /home above), it probably makes sense
 # to have /home in a separate archive.
 borg create $BORG_OPTS \
 --exclude 'sh:home/*/.cache' \

@@ -150,17 +151,16 @@ modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 fi
 
 Create the ``/etc/backups/autoeject`` file to have the script automatically eject the drive
-after creating the backup. Rename the file to something else (e.g. ``/etc/backups/autoeject-no``)
-when you want to do something with the drive after creating backups (e.g running check).
+after creating the backup. Rename the file to something else (e.g., ``/etc/backups/autoeject-no``)
+when you want to do something with the drive after creating backups (e.g., running checks).
 
 Create the ``/etc/backups/backup-suspend`` file if the machine should suspend after completing
 the backup. Don't forget to disconnect the device physically before resuming,
 otherwise you'll enter a cycle. You can also add an option to power down instead.
 
-Create an empty ``/etc/backups/backup.disks`` file, you'll register your backup drives
-there.
+Create an empty ``/etc/backups/backup.disks`` file, in which you will register your backup drives.
 
-The last part is actually to enable the udev rules and services:
+Finally, enable the udev rules and services:
 
 .. code-block:: bash

@@ -173,13 +173,13 @@ Adding backup hard drives
 -------------------------
 
 Connect your backup hard drive. Format it, if not done already.
-Find the UUID of the file system that backups should be stored on::
+Find the UUID of the filesystem on which backups should be stored::
 
 lsblk -o+uuid,label
 
-Note the UUID into the ``/etc/backups/backup.disks`` file.
+Record the UUID in the ``/etc/backups/backup.disks`` file.
 
-Mount the drive to /mnt/backup.
+Mount the drive at /mnt/backup.
 
 Initialize a Borg repository at the location indicated by ``TARGET``::

@@ -197,14 +197,14 @@ See backup logs using journalctl::
 Security considerations
 -----------------------
 
-The script as shown above will mount any file system with an UUID listed in
-``/etc/backups/backup.disks``. The UUID check is a safety / annoyance-reduction
+The script as shown above will mount any filesystem with a UUID listed in
+``/etc/backups/backup.disks``. The UUID check is a safety/annoyance-reduction
 mechanism to keep the script from blowing up whenever a random USB thumb drive is connected.
-It is not meant as a security mechanism. Mounting file systems and reading repository
-data exposes additional attack surfaces (kernel file system drivers,
-possibly user space services and Borg itself). On the other hand, someone
+It is not meant as a security mechanism. Mounting filesystems and reading repository
+data exposes additional attack surfaces (kernel filesystem drivers,
+possibly userspace services, and Borg itself). On the other hand, someone
 standing right next to your computer can attempt a lot of attacks, most of which
-are easier to do than e.g. exploiting file systems (installing a physical key logger,
+are easier to do than, e.g., exploiting filesystems (installing a physical keylogger,
 DMA attacks, stealing the machine, ...).
 
 Borg ensures that backups are not created on random drives that "just happen"
@@ -5,44 +5,44 @@
 Central repository server with Ansible or Salt
 ==============================================
 
-This section will give an example how to set up a borg repository server for multiple
+This section gives an example of how to set up a Borg repository server for multiple
 clients.
 
 Machines
 --------
 
-There are multiple machines used in this section and will further be named by their
-respective fully qualified domain name (fqdn).
+This section uses multiple machines, referred to by their
+respective fully qualified domain names (FQDNs).
 
 * The backup server: `backup01.srv.local`
 * The clients:
 
 - John Doe's desktop: `johndoe.clnt.local`
-- Webserver 01: `web01.srv.local`
+- Web server 01: `web01.srv.local`
 - Application server 01: `app01.srv.local`
 
 User and group
 --------------
 
-The repository server needs to have only one UNIX user for all the clients.
+The repository server should have a single UNIX user for all the clients.
 Recommended user and group with additional settings:
 
 * User: `backup`
 * Group: `backup`
-* Shell: `/bin/bash` (or other capable to run the `borg serve` command)
+* Shell: `/bin/bash` (or another shell capable of running the `borg serve` command)
 * Home: `/home/backup`
 
-Most clients shall initiate a backup from the root user to catch all
-users, groups and permissions (e.g. when backing up `/home`).
+Most clients should initiate a backup as the root user to capture all
+users, groups, and permissions (e.g., when backing up `/home`).
 
 Folders
 -------
 
-The following folder tree layout is suggested on the repository server:
+The following directory layout is suggested on the repository server:
 
 * User home directory, /home/backup
 * Repositories path (storage pool): /home/backup/repos
-* Clients restricted paths (`/home/backup/repos/<client fqdn>`):
+* Clients’ restricted paths (`/home/backup/repos/<client fqdn>`):
 
 - johndoe.clnt.local: `/home/backup/repos/johndoe.clnt.local`
 - web01.srv.local: `/home/backup/repos/web01.srv.local`

@@ -60,10 +60,10 @@ but no other directories. You can allow a client to access several separate dire
 which could make sense if multiple machines belong to one person which should then have access to all the
 backups of their machines.
 
-There is only one ssh key per client allowed. Keys are added for ``johndoe.clnt.local``, ``web01.srv.local`` and
-``app01.srv.local``. But they will access the backup under only one UNIX user account as:
+Only one SSH key per client is allowed. Keys are added for ``johndoe.clnt.local``, ``web01.srv.local`` and
+``app01.srv.local``. They will access the backup under a single UNIX user account as
 ``backup@backup01.srv.local``. Every key in ``$HOME/.ssh/authorized_keys`` has a
-forced command and restrictions applied as shown below:
+forced command and restrictions applied, as shown below:
 
 ::

@@ -73,19 +73,19 @@ forced command and restrictions applied as shown below:
 
 .. note:: The text shown above needs to be written on a single line!
 
-The options which are added to the key will perform the following:
+The options added to the key perform the following:
 
 1. Change working directory
 2. Run ``borg serve`` restricted to the client base path
 3. Restrict ssh and do not allow stuff which imposes a security risk
 
-Due to the ``cd`` command we use, the server automatically changes the current
-working directory. Then client doesn't need to have knowledge of the absolute
+Because of the ``cd`` command, the server automatically changes the current
+working directory. The client then does not need to know the absolute
 or relative remote repository path and can directly access the repositories at
 ``ssh://<user>@<host>/./<repo>``.
 
-.. note:: The setup above ignores all client given commandline parameters
-which are normally appended to the `borg serve` command.
+.. note:: The setup above ignores all client-given command line parameters
+that are normally appended to the `borg serve` command.
 
 Client
 ------

@@ -96,15 +96,15 @@ The client needs to initialize the `pictures` repository like this:
 
 borg init ssh://backup@backup01.srv.local/./pictures
 
-Or with the full path (should actually never be used, as only for demonstration purposes).
-The server should automatically change the current working directory to the `<client fqdn>` folder.
+Or with the full path (this should not be used in practice; it is only for demonstration purposes).
+The server automatically changes the current working directory to the `<client fqdn>` directory.
 
 ::
 
 borg init ssh://backup@backup01.srv.local/home/backup/repos/johndoe.clnt.local/pictures
 
-When `johndoe.clnt.local` tries to access a not restricted path the following error is raised.
-John Doe tries to back up into the Web 01 path:
+When `johndoe.clnt.local` tries to access a path outside its restriction, the following error is raised.
+John Doe tries to back up into the web01 path:
 
 ::
@@ -5,21 +5,21 @@
 Hosting repositories
 ====================
 
-This sections shows how to provide repository storage securely for users.
+This section shows how to provide repository storage securely for users.
 
 Repositories are accessed through SSH. Each user of the service should
-have her own login which is only able to access the user's files.
-Technically it would be possible to have multiple users share one login,
+have their own login, which is only able to access that user's files.
+Technically, it is possible to have multiple users share one login;
 however, separating them is better. Separate logins increase isolation
-and are thus an additional layer of security and safety for both the
+and provide an additional layer of security and safety for both the
 provider and the users.
 
-For example, if a user manages to breach ``borg serve`` then she can
-only damage her own data (assuming that the system does not have further
+For example, if a user manages to breach ``borg serve``, they can
+only damage their own data (assuming that the system does not have further
 vulnerabilities).
 
 Use the standard directory structure of the operating system. Each user
-is assigned a home directory and repositories of the user reside in her
+is assigned a home directory, and that user's repositories reside in their
 home directory.
 
 The following ``~user/.ssh/authorized_keys`` file is the most important

@@ -47,7 +47,7 @@ and X11 forwarding, as well as disabling PTY allocation and execution of ~/.ssh/
 If any future restriction capabilities are added to authorized_keys
 files they will be included in this set.
 
-The ``command`` keyword forces execution of the specified command line
+The ``command`` keyword forces execution of the specified command
 upon login. This must be ``borg serve``. The ``--restrict-to-repository``
 option permits access to exactly **one** repository. It can be given
 multiple times to permit access to more than one repository.
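To make the ``authorized_keys`` mechanism discussed in the hunks above concrete, a single entry could look like the sketch below (the key material, username, and repository path are hypothetical placeholders; ``command=``, ``restrict``, and ``borg serve --restrict-to-repository`` are the keywords and option named in the text):

```
command="borg serve --restrict-to-repository /home/user/repository",restrict ssh-ed25519 AAAA... user@client
```

As the hosting document itself notes, the entire entry must be written on a single line.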
@ -6,10 +6,10 @@ Backing up entire disk images
|
|||
|
||||
Backing up disk images can still be efficient with Borg because its `deduplication`_
|
||||
technique makes sure only the modified parts of the file are stored. Borg also has
|
||||
optional simple sparse file support for extract.
|
||||
optional simple sparse file support for extraction.
|
||||
|
||||
It is of utmost importancy to pin down the disk you want to backup.
|
||||
You need to use the SERIAL for that.
|
||||
It is of utmost importance to pin down the disk you want to back up.
|
||||
Use the disk's SERIAL for that.
|
||||
Use:
|
||||
|
||||
.. code-block:: bash
|
||||
|
|
@ -25,10 +25,10 @@ Use:

   echo "${PARTITIONS[@]}"
   echo "Disk Identifier: $DISK_ID"

   # Use the following line to perform a borg backup for the full disk:
   # Use the following line to perform a Borg backup for the full disk:
   # borg create --read-special disk-backup "$DISK_ID"

   # Use the following to perform a borg backup for all partitions of the disk
   # Use the following to perform a Borg backup for all partitions of the disk
   # borg create --read-special partitions-backup "${PARTITIONS[@]}"

   # Example output:
@ -47,7 +47,7 @@ Decreasing the size of image backups
------------------------------------

Disk images are as large as the full disk when uncompressed and might not get much
smaller post-deduplication after heavy use because virtually all file systems don't
smaller post-deduplication after heavy use because virtually all filesystems do not
actually delete file data on disk but instead delete the filesystem entries referencing
the data. Therefore, if a disk nears capacity and files are deleted again, the change
will barely decrease the space it takes up when compressed and deduplicated. Depending
@ -115,8 +115,8 @@ Because the partitions were zeroed in place, restoration is only one command::

mounted, is simply to ``dd`` from ``/dev/zero`` to a temporary file and delete
it. This is ill-advised for the reasons mentioned in the ``zerofree`` man page:

- it is slow
- it makes the disk image (temporarily) grow to its maximal extent
- it is slow.
- it makes the disk image (temporarily) grow to its maximal extent.
- it (temporarily) uses all free space on the disk, so other concurrent write actions may fail.
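``zerofree`` itself avoids these drawbacks because it only rewrites blocks that are already free. A typical invocation might look like this (the device name is a placeholder; the ext2/3/4 filesystem must be unmounted or mounted read-only first):

```
zerofree -v /dev/sdX1
```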

Virtual machines

@ -129,7 +129,7 @@ regular file to Borg with the same issues as regular files when it comes to conc

reading and writing from the same file.

For backing up live VMs use filesystem snapshots on the VM host, which establishes
crash-consistency for the VM images. This means that with most file systems (that
crash-consistency for the VM images. This means that with most filesystems (that
are journaling) the FS will always be fine in the backup (but may need a journal
replay to become accessible).
@ -145,10 +145,10 @@ to reach application-consistency; it's a broad and complex issue that cannot be

in entirety here.

Hypervisor snapshots capturing most of the VM's state can also be used for backups and
can be a better alternative to pure file system based snapshots of the VM's disk, since
can be a better alternative to pure filesystem-based snapshots of the VM's disk, since
no state is lost. Depending on the application this can be the easiest and most reliable
way to create application-consistent backups.

Borg doesn't intend to address these issues due to their huge complexity and
Borg does not intend to address these issues due to their huge complexity and
platform/software dependency. Combining Borg with the mechanisms provided by the platform
(snapshots, hypervisor features) will be the best approach to start tackling them.

@ -6,29 +6,29 @@

Backing up using a non-root user
================================

This section describes how to run borg as a non-root user and still be able to
backup every file on the system.
This section describes how to run Borg as a non-root user and still be able to
back up every file on the system.

Normally borg is run as the root user to bypass all filesystem permissions and
be able to read all files. But in theory this also allows borg to modify or
delete files on your system, in case of a bug for example.
Normally, Borg is run as the root user to bypass all filesystem permissions and
be able to read all files. However, in theory this also allows Borg to modify or
delete files on your system (for example, in case of a bug).

To eliminate this possibility, we can run borg as a non-root user and give it read-only
To eliminate this possibility, we can run Borg as a non-root user and give it read-only
permissions to all files on the system.

Using Linux capabilities inside a systemd service
=================================================

One way to do so, is to use linux `capabilities
One way to do so is to use Linux `capabilities
<https://man7.org/linux/man-pages/man7/capabilities.7.html>`_ within a systemd
service.

Linux capabilities allow us to give parts of the privileges the root user has to
a non-root user. This works on a per-thread level and does not give the permission
Linux capabilities allow us to grant parts of the root user’s privileges to
a non-root user. This works on a per-thread level and does not grant permissions
to the non-root user as a whole.

For this we need to run our backup script from a systemd service and use the `AmbientCapabilities
For this, we need to run the backup script from a systemd service and use the `AmbientCapabilities
<https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#AmbientCapabilities=>`_
option added in systemd 229.
@ -46,21 +46,21 @@ A very basic unit file would look like this:

   AmbientCapabilities=CAP_DAC_READ_SEARCH

The ``CAP_DAC_READ_SEARCH`` capability gives borg read-only access to all files and directories on the system.
The ``CAP_DAC_READ_SEARCH`` capability gives Borg read-only access to all files and directories on the system.

This service can then be started manually using ``systemctl start``, a systemd timer or other methods.
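For reference, a complete one-shot service built around this capability might look like the following sketch (the unit description, user name, and script path are assumptions, not taken from the text above):

```ini
[Unit]
Description=Borg backup (non-root)

[Service]
Type=oneshot
User=backup
Group=backup
# grant read-only access to all files without running as root
AmbientCapabilities=CAP_DAC_READ_SEARCH
ExecStart=/home/backup/backup.sh
```

Such a unit can then be triggered manually with ``systemctl start`` or by a matching systemd timer.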

Restore considerations
======================

When restoring files, the root user should be used. When using the non-root user, borg extract will
change all files to be owned by the non-root user. Using borg mount will not allow the non-root user
to access files that it would not have access to on the system itself.
Use the root user when restoring files. If you use the non-root user, ``borg extract`` will
change ownership of all restored files to the non-root user. Using ``borg mount`` will not allow the
non-root user to access files it would not be able to access on the system itself.

Other than that, the same restore process, that would be used when running the backup as root, can be used.
Other than that, you can use the same restore process you would use when running the backup as root.

.. warning::

   When using a local repo and running borg commands as root, make sure to only use commands that do not
   modify the repo itself, like extract or mount. Modifying the repo using the root user will break
   the repo for the non-root user, since some files inside the repo will now be owned by root.
   When using a local repository and running Borg commands as root, make sure to use only commands that do not
   modify the repository itself, such as extract or mount. Modifying the repository as root will break it for the
   non-root user, since some files inside the repository will then be owned by root.

@ -6,12 +6,12 @@

Backing up in pull mode
=======================

Typically the borg client connects to a backup server using SSH as a transport
Typically the Borg client connects to a backup server using SSH as a transport
when initiating a backup. This is referred to as push mode.

If you however require the backup server to initiate the connection or prefer
However, if you require the backup server to initiate the connection, or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull mode setup.
allow such a pull-mode setup.

A common use case for pull mode is to back up a remote server to a local personal
computer.
@ -19,38 +19,38 @@ computer.

SSHFS
=====

Assuming you have a pull backup system set up with borg, where a backup server
pulls the data from the target via SSHFS. In this mode, the backup client's file
system is mounted remotely on the backup server. Pull mode is even possible if
Assume you have a pull backup system set up with Borg, where a backup server
pulls data from the target via SSHFS. In this mode, the backup client's filesystem
is mounted remotely on the backup server. Pull mode is even possible if
the SSH connection must be established by the client via a remote tunnel. Other
network file systems like NFS or SMB could be used as well, but SSHFS is very
simple to set up and probably the most secure one.

There are some restrictions caused by SSHFS. For example, unless you define UID
and GID mappings when mounting via ``sshfs``, owners and groups of the mounted
file system will probably change, and you may not have access to those files if
BorgBackup is not run with root privileges.
filesystem will probably change, and you may not have access to those files if
Borg is not run with root privileges.

SSHFS is a FUSE file system and uses the SFTP protocol, so there may be also
other unsupported features that the actual implementations of sshfs, libfuse and
sftp on the backup server do not support, like file name encodings, ACLs, xattrs
or flags. So there is no guarantee that you are able to restore a system
SSHFS is a FUSE filesystem and uses the SFTP protocol, so there may also be
unsupported features that the actual implementations of SSHFS, libfuse, and
SFTP on the backup server do not support, like filename encodings, ACLs, xattrs,
or flags. Therefore, there is no guarantee that you can restore a system
completely in every aspect from such a backup.
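As a sketch, the server-side steps could look like this (the host name, mount point, and repository path are placeholders; the borg invocation uses the current ``--repo`` syntax and may need adjusting for your Borg version):

```
# on the backup server: mount the client's filesystem read-only
mkdir -p /mnt/client
sshfs root@client:/ /mnt/client -o ro,reconnect

# back up the mounted tree, then unmount again
borg --repo /backup/repos/client1 create client1-{now} /mnt/client
fusermount -u /mnt/client
```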

.. warning::

   To mount the client's root file system you will need root access to the
   client. This contradicts to the usual threat model of BorgBackup, where
   clients don't need to trust the backup server (data is encrypted). In pull
   To mount the client's root filesystem you will need root access to the
   client. This contradicts the usual threat model of Borg, where
   clients do not need to trust the backup server (data is encrypted). In pull
   mode the server (when logged in as root) could cause unlimited damage to the
   client. Therefore, pull mode should be used only from servers you do fully
   client. Therefore, pull mode should be used only with servers you fully
   trust!

.. warning::

   Additionally, while being chrooted into the client's root file system,
   code from the client will be executed. Thus, you should only do that when
   fully trusting the client.
   Additionally, while chrooted into the client's root filesystem,
   code from the client will be executed. Therefore, you should do this only when
   you fully trust the client.

.. warning::

@ -8,7 +8,7 @@ Development

This chapter will get you started with Borg development.

Borg is written in Python (with a little bit of Cython and C for
the performance critical parts).
the performance-critical parts).

Contributions
-------------
@ -52,14 +52,14 @@ Borg development happens on the ``master`` branch and uses GitHub pull

requests (if you don't have GitHub or don't want to use it you can
send smaller patches via the borgbackup mailing list to the maintainers).

Stable releases are maintained on maintenance branches named ``x.y-maint``, eg.
Stable releases are maintained on maintenance branches named ``x.y-maint``, e.g.,
the maintenance branch of the 1.4.x series is ``1.4-maint``.

Most PRs should be filed against the ``master`` branch. Only if an
issue affects **only** a particular maintenance branch a PR should be
filed against it directly.

While discussing / reviewing a PR it will be decided whether the
While discussing/reviewing a PR it will be decided whether the
change should be applied to maintenance branches. Each maintenance
branch has a corresponding *backport/x.y-maint* label, which will then
be applied.
@ -105,8 +105,8 @@ were collected:

Previously (until release 1.0.10) Borg used a `"merge upwards"
<https://git-scm.com/docs/gitworkflows#_merging_upwards>`_ model where
most minor changes and fixes where committed to a maintenance branch
(eg. 1.0-maint), and the maintenance branch(es) were regularly merged
most minor changes and fixes were committed to a maintenance branch
(e.g. 1.0-maint), and the maintenance branch(es) were regularly merged
back into the main development branch. This became more and more
troublesome due to merges growing more conflict-heavy and error-prone.

50 docs/faq.rst
@ -8,19 +8,19 @@ Frequently asked questions

Usage & Limitations
###################

What is the difference between a repo on an external hard drive vs. repo on a server?
-------------------------------------------------------------------------------------
What is the difference between a repository on an external hard drive and a repository on a server?
---------------------------------------------------------------------------------------------------

If Borg is running in client/server mode, the client uses SSH as a transport to
talk to the remote agent, which is another Borg process (Borg is installed on
the server, too) started automatically by the client. The Borg server is doing
storage-related low-level repo operations (list, load and store objects),
talk to a remote agent, which is another Borg process (Borg is also installed on
the server) started automatically by the client. The Borg server performs
storage-related, low-level repository operations (list, load, and store objects),
while the Borg client does the high-level stuff: deduplication, encryption,
compression, dealing with archives, backups, restores, etc., which reduces the
amount of data that goes over the network.

When Borg is writing to a repo on a locally mounted remote file system, e.g.
SSHFS, the Borg client only can do file system operations and has no agent
When Borg is writing to a repo on a locally mounted remote filesystem, e.g.
SSHFS, the Borg client can only perform filesystem operations and has no agent
running on the remote side, so *every* operation needs to go over the network,
which is slower.
@ -29,7 +29,7 @@ Can I back up from multiple servers into a single repository?

Yes, you can! Even simultaneously.

Can I back up to multiple, swapped backup targets?
Can I back up to multiple swapped backup targets?
--------------------------------------------------

It is possible to swap your backup disks if each backup medium is assigned its
@ -43,41 +43,41 @@ locations), the recommended way to do that is like this:

- ``borg repo-create repo1 --encryption=X``
- ``borg repo-create repo2 --encryption=X --other-repo=repo1``
- maybe do a snapshot to have stable and same input data for both borg create.
- Optionally, create a snapshot to have stable and identical input data for both borg create runs.
- client machine ---borg create---> repo1
- client machine ---borg create---> repo2

This will create distinct (different repo ID), but related repositories.
This will create distinct (different repository ID) but related repositories.
Related means using the same chunker secret and the same id_key, thus producing
the same chunks / the same chunk ids if the input data is the same.
the same chunks/the same chunk IDs if the input data is the same.

The 2 independent borg create invocations mean that there is no error propagation
The two independent borg create invocations mean there is no error propagation
from repo1 to repo2 when done like that.

An alternative way would be to use ``borg transfer`` to copy backup archives
from repo1 to repo2. Likely a bit more efficient and the archives would be identical,
but suffering from potential error propagation.
An alternative is to use ``borg transfer`` to copy backup archives
from repo1 to repo2. This is likely a bit more efficient and the archives would be identical,
but it may suffer from potential error propagation.

Warning: using borg with multiple repositories with identical repository ID (like when
creating 1:1 repository copies) is not supported and can lead to all sorts of issues,
like e.g. cache coherency issues, malfunction, data corruption.
Warning: Using Borg with multiple repositories that have identical repository IDs (such as
creating 1:1 repository copies) is not supported and can lead to various issues,
for example cache coherency issues, malfunction, or data corruption.

"this is either an attack or unsafe" warning
--------------------------------------------

About the warning:

Cache, or information obtained from the security directory is newer than
repository - this is either an attack or unsafe (multiple repos with same ID)
Cache or information obtained from the security directory is newer than the
repository — this is either an attack or unsafe (multiple repositories with the same ID)

"unsafe": If not following the advice from the previous section, you can easily
run into this by yourself by restoring an older copy of your repository.

"attack": maybe an attacker has replaced your repo by an older copy, trying to
trick you into AES counter reuse, trying to break your repo encryption.
"attack": An attacker may have replaced your repo with an older copy, trying to
trigger AES counter reuse and break your repo encryption.

Borg users have also reported that fs issues (like hw issues / I/O errors causing
the fs to become read-only) can cause this warning, see :issue:`7853`.
Borg users have also reported that file system (fs) issues (e.g., hardware issues or I/O errors causing
the file system to become read-only) can cause this warning, see :issue:`7853`.

If you decide to ignore this and accept unsafe operation for this repository,
you could delete the manifest-timestamp and the local cache:
@ -88,7 +88,7 @@ you could delete the manifest-timestamp and the local cache:

    rm ~/.config/borg/security/REPO_ID/manifest-timestamp
    borg repo-delete --cache-only

This is an unsafe and unsupported way to use borg, you have been warned.
This is an unsafe and unsupported way to use Borg. You have been warned.

Which file types, attributes, etc. are *not* preserved?
-------------------------------------------------------

@ -6,7 +6,7 @@ Borg Documentation

.. include:: ../README.rst

.. when you add an element here, do not forget to add it to book.rst
.. When you add an element here, do not forget to add it to book.rst.

.. toctree::
   :maxdepth: 2
@ -82,9 +82,9 @@ Ubuntu `Ubuntu packages`_, `Ubuntu PPA`_ ``apt install borgbac

.. _Ubuntu packages: https://launchpad.net/ubuntu/+source/borgbackup
.. _Ubuntu PPA: https://launchpad.net/~costamagnagianfranco/+archive/ubuntu/borgbackup

Please ask package maintainers to build a package or, if you can package /
Please ask package maintainers to build a package or, if you can package/
submit it yourself, please help us with that! See :issue:`105` on
github to followup on packaging efforts.
GitHub to follow up on packaging efforts.

**Current status of package in the repositories**
@ -106,12 +106,12 @@ Standalone Binary

.. note:: Releases are signed with an OpenPGP key, see
   :ref:`security-contact` for more instructions.

Borg x86/x64 amd/intel compatible binaries (generated with `pyinstaller`_)
Borg x86/x64 AMD/Intel compatible binaries (generated with `pyinstaller`_)
are available on the releases_ page for the following platforms:

* **Linux**: glibc >= 2.28 (ok for most supported Linux releases).
  Older glibc releases are untested and may not work.
* **MacOS**: 10.12 or newer (To avoid signing issues download the file via
* **macOS**: 10.12 or newer (To avoid signing issues, download the file via
  command line **or** remove the ``quarantine`` attribute after downloading:
  ``$ xattr -dr com.apple.quarantine borg-macosx64.tgz``)
* **FreeBSD**: 12.1 (unknown whether it works for older releases)

@ -4,7 +4,7 @@

Internals
=========

The internals chapter describes and analyses most of the inner workings
The internals chapter describes and analyzes most of the inner workings
of Borg.

Borg uses a low-level, key-value store, the :ref:`repository`, and
@ -20,17 +20,17 @@ Deduplication is performed globally across all data in the repository

(multiple backups and even multiple hosts), both on data and file
metadata, using :ref:`chunks` created by the chunker using the
Buzhash_ algorithm ("buzhash" and "buzhash64" chunker) or a simpler
fixed blocksize algorithm ("fixed" chunker).
fixed block size algorithm ("fixed" chunker).

To perform the repository-wide deduplication, a hash of each
chunk is checked against the :ref:`chunks cache <cache>`, which is a
hash-table of all chunks that already exist.
hash table of all chunks that already exist.
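The lookup can be pictured with a plain dict standing in for the chunks cache. Note this sketch uses an unkeyed SHA-256 where Borg's real id-hash is keyed, so it only illustrates the mechanism, not the actual format:

```python
import hashlib

def store_chunks(chunks, cache=None):
    """Store each distinct chunk once; 'cache' plays the role of the chunks cache."""
    cache = {} if cache is None else cache
    refs = []
    for chunk in chunks:
        cid = hashlib.sha256(chunk).digest()
        cache.setdefault(cid, chunk)  # payload is written only if the ID is new
        refs.append(cid)
    return cache, refs

cache, refs = store_chunks([b"aaa", b"bbb", b"aaa"])
print(len(cache), len(refs))  # 2 distinct chunks, 3 references
```

The third chunk hashes to an ID already in the cache, so only a reference is recorded and no new payload is stored.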

.. figure:: internals/structure.png
   :figwidth: 100%
   :width: 100%

   Layers in Borg. On the very top commands are implemented, using
   Layers in Borg. At the very top, commands are implemented, using
   a data access layer provided by the Archive and Item classes.
   The "key" object provides both compression and authenticated
   encryption used by the data access layer. The "key" object represents

@ -63,7 +63,7 @@ Keys
~~~~

Repository object IDs (which are used as key into the key-value store) are
byte-strings of fixed length (256bit, 32 bytes), computed like this::
byte strings of fixed length (256-bit, 32 bytes), computed like this::

    key = id = id_hash(plaintext_data)  # plain = not encrypted, not compressed, not obfuscated
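As a rough illustration of the shape of the result (Borg's real ``id_hash`` is keyed, e.g. HMAC-SHA256 or keyed BLAKE2b depending on the key type, so this unkeyed stand-in is not the actual computation):

```python
import hashlib

def id_hash(plaintext_data: bytes) -> bytes:
    # stand-in for Borg's keyed id-hash; returns a 256-bit (32-byte) object ID
    return hashlib.blake2b(plaintext_data, digest_size=32).digest()

key = id_hash(b"example chunk data")
print(len(key))  # 32 bytes == 256 bits
```

Because the ID is a function of the plaintext alone, identical chunks always map to the same key, which is what makes the key-value store deduplicating.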

@ -79,10 +79,10 @@ Each repository object is stored separately, under its ID into data/xx/yy/xxyy..

A repo object has a structure like this:

* 32bit meta size
* 32bit data size
* 64bit xxh64(meta)
* 64bit xxh64(data)
* 32-bit meta size
* 32-bit data size
* 64-bit xxh64(meta)
* 64-bit xxh64(data)
* meta
* data
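The fixed-size header portion can be sketched with ``struct`` (the field order follows the list above, but the little-endian layout and the placeholder hash values are assumptions made for this example, not taken from the on-disk format specification):

```python
import struct

meta = b"packed-metadata-placeholder"
data = b"chunk payload"

# 32-bit meta size, 32-bit data size, 64-bit xxh64(meta), 64-bit xxh64(data)
header = struct.pack("<IIQQ", len(meta), len(data), 0x1111, 0x2222)
print(len(header))  # 4 + 4 + 8 + 8 = 24 header bytes

m_size, d_size, xxh_meta, xxh_data = struct.unpack("<IIQQ", header)
```

A server can read just these 24 bytes plus the payload to verify sizes and hashes, which is what enables the corruption check described below without decrypting anything.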

@ -90,8 +90,8 @@ The size and xxh64 hashes can be used for server-side corruption checks without

needing to decrypt anything (which would require the borg key).

The overall size of repository objects varies from very small (a small source
file will be stored as a single repo object) to medium (big source files will
be cut into medium sized chunks of some MB).
file will be stored as a single repository object) to medium (big source files will
be cut into medium-sized chunks of some MB).

Metadata and data are separately encrypted and authenticated (depending on
the user's choices).
@ -102,7 +102,7 @@ encryption.

Repo object metadata
~~~~~~~~~~~~~~~~~~~~

Metadata is a msgpacked (and encrypted/authenticated) dict with:
Metadata is a MessagePack-encoded (and encrypted/authenticated) dict with:

- ctype (compression type 0..255)
- clevel (compression level 0..255)

@ -11,8 +11,8 @@ but does mean that there are no release-to-release guarantees on what you might

even for point releases (1.1.x), and there is no documentation beyond the code and the internals documents.

Borg does on the other hand provide an API on a command-line level. In other words, a frontend should
(for example) create a backup archive just invoke :ref:`borg_create`, give commandline parameters/options
as needed and parse JSON output from borg.
(for example) create a backup archive by invoking :ref:`borg_create`, provide command-line parameters/options
as needed, and parse JSON output from Borg.

Important: JSON output is expected to be UTF-8, but currently borg depends on the locale being configured
for that (must be a UTF-8 locale and *not* "C" or "ascii"), so that Python will choose to encode to UTF-8.
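A frontend's invoke-and-parse step can be sketched like this (the JSON sample is abridged and illustrative, not Borg's exact output schema):

```python
import json
import subprocess

def run_borg_create(args):
    # a real frontend would run "borg create --json ..." and read its stdout
    result = subprocess.run(["borg", "create", "--json", *args],
                            capture_output=True, text=True, encoding="utf-8")
    return json.loads(result.stdout)

# parsing the (abridged, illustrative) JSON shape without invoking borg:
sample = '{"archive": {"name": "docs", "duration": 1.2}, "repository": {"location": "/repo"}}'
info = json.loads(sample)
print(info["archive"]["name"])
```

Forcing ``encoding="utf-8"`` on the pipe sidesteps part of the locale caveat mentioned above, though Borg's own side still needs a UTF-8 locale to emit UTF-8.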
@ -24,7 +24,7 @@ The attack model of Borg is that the environment of the client process

attacker has any and all access to the repository, including interactive
manipulation (man-in-the-middle) for remote repositories.

Furthermore the client environment is assumed to be persistent across
Furthermore, the client environment is assumed to be persistent across
attacks (practically this means that the security database cannot be
deleted between attacks).
@ -37,8 +37,8 @@ Under these circumstances Borg guarantees that the attacker cannot

structural information such as the object graph (which archives
refer to what chunks)

The attacker can always impose a denial of service per definition (he could
forbid connections to the repository, or delete it partly or entirely).
The attacker can always impose a denial of service by definition (they could
block connections to the repository, or delete it partly or entirely).

.. _security_structural_auth:

@ -1,8 +1,8 @@

Introduction
============

.. this shim is here to fix the structure in the PDF
   rendering. without this stub, the elements in the toctree of
   index.rst show up a level below the README file included
.. This shim is here to fix the structure in the PDF
   rendering. Without this stub, the elements in the toctree of
   index.rst show up a level below the README file included.

.. include:: ../README.rst

@ -13,7 +13,7 @@ DESCRIPTION

BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure way to back data up.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to targets not
@ -22,7 +22,7 @@ fully trusted.

Borg stores a set of files in an *archive*. A *repository* is a collection
of *archives*. The format of repositories is Borg-specific. Borg does not
distinguish archives from each other in any way other than their name,
it does not matter when or where archives were created (e.g. different hosts).
it does not matter when or where archives were created (e.g., different hosts).

EXAMPLES
--------

@ -1,9 +1,9 @@

borg benchmark crud
===================

Here is some example of borg benchmark crud output.
Here is an example of borg benchmark crud output.

I ran it on my laptop, Core i5-4200u, 8GB RAM, SATA SSD, Linux, ext4 fs.
I ran it on my laptop: Core i5-4200U, 8 GB RAM, SATA SSD, Linux, ext4 filesystem.
"src" as well as repo is local, on this SSD.

$ BORG_PASSPHRASE=secret borg init --encryption repokey-blake2 repo

@ -3,53 +3,53 @@ About borg create --chunker-params

--chunker-params CHUNK_MIN_EXP,CHUNK_MAX_EXP,HASH_MASK_BITS,HASH_WINDOW_SIZE

CHUNK_MIN_EXP and CHUNK_MAX_EXP give the exponent N of the 2^N minimum and
maximum chunk size. Required: CHUNK_MIN_EXP < CHUNK_MAX_EXP.
CHUNK_MIN_EXP and CHUNK_MAX_EXP provide the exponent N for the 2^N minimum and
maximum chunk sizes. Required: CHUNK_MIN_EXP < CHUNK_MAX_EXP.

Defaults: 19 (2^19 == 512KiB) minimum, 23 (2^23 == 8MiB) maximum.
Currently it is not supported to give more than 23 as maximum.
Currently, specifying a maximum greater than 23 is not supported.

HASH_MASK_BITS is the number of least-significant bits of the rolling hash
that need to be zero to trigger a chunk cut.
Recommended: CHUNK_MIN_EXP + X <= HASH_MASK_BITS <= CHUNK_MAX_EXP - X, X >= 2
(this allows the rolling hash some freedom to make its cut at a place
determined by the windows contents rather than the min/max. chunk size).
determined by the window's contents rather than the minimum/maximum chunk size).

Default: 21 (statistically, chunks will be about 2^21 == 2MiB in size)
Default: 21 (statistically, chunks will be about 2^21 == 2MiB in size).

HASH_WINDOW_SIZE: the size of the window used for the rolling hash computation.
Must be an odd number. Default: 4095B
Must be an odd number. Default: 4095B.
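Spelled out with those four default values, the option could be passed like this (archive name and source path are placeholders; newer Borg versions instead expect a leading chunker algorithm name, e.g. ``buzhash,19,23,21,4095``):

```
borg create --chunker-params 19,23,21,4095 backup-{now} ~/data
```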
|
||||
|
||||
|
||||
Trying it out
|
||||
=============
|
||||
|
||||
I backed up a VM directory to demonstrate how different chunker parameters
|
||||
influence repo size, index size / chunk count, compression, deduplication.
|
||||
affect repository size, index size/chunk count, compression, and deduplication.
|
||||
|
||||
repo-sm: ~64kiB chunks (16 bits chunk mask), min chunk size 1kiB (2^10B)
|
||||
(these are attic / borg 0.23 internal defaults)
|
||||
repo-sm: ~64 KiB chunks (16 bits chunk mask), min chunk size 1 KiB (2^10B)
|
||||
(these are Attic/Borg 0.23 internal defaults)
|
||||
|
||||
repo-lg: ~1MiB chunks (20 bits chunk mask), min chunk size 64kiB (2^16B)
|
||||
repo-lg: ~1MiB chunks (20 bits chunk mask), min chunk size 64 KiB (2^16B)
|
||||
|
||||
repo-xl: 8MiB chunks (2^23B max chunk size), min chunk size 64kiB (2^16B).
|
||||
The chunk mask bits was set to 31, so it (almost) never triggers.
|
||||
repo-xl: 8MiB chunks (2^23B max chunk size), min chunk size 64 KiB (2^16B).
|
||||
The chunk mask bits were set to 31, so it (almost) never triggers.
|
||||
This degrades the rolling hash based dedup to a fixed-offset dedup
|
||||
as the cutting point is now (almost) always the end of the buffer
|
||||
(at 2^23B == 8MiB).
|
||||
|
||||
The repo index size is an indicator for the RAM needs of Borg.
|
||||
In this special case, the total RAM needs are about 2.1x the repo index size.
|
||||
You see index size of repo-sm is 16x larger than of repo-lg, which corresponds
|
||||
You see the index size of repo-sm is 16x larger than that of repo-lg, which corresponds
|
||||
to the ratio of the different target chunk sizes.
|
||||
|
||||
Note: RAM needs were not a problem in this specific case (37GB data size).
|
||||
But just imagine, you have 37TB of such data and much less than 42GB RAM,
|
||||
then you should use the "lg" chunker params so you only need
|
||||
2.6GB RAM. Or even bigger chunks than shown for "lg" (see "xl").
|
||||
But imagine you have 37TB of such data and much less than 42GB RAM,
|
||||
then you should use the "lg" chunker parameters so you need only
|
||||
2.6GB RAM. Or even larger chunks than shown for "lg" (see "xl").
|
||||
|
||||
You also see compression works better for larger chunks, as expected.
|
||||
Duplication works worse for larger chunks, also as expected.
|
||||
Deduplication works worse for larger chunks, also as expected.
|
||||
|
||||
small chunks
|
||||
============
|
||||
|
|
|
|||
|
|
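The chunk-mask arithmetic described in the hunk above can be sanity-checked with a few lines of Python. This is illustrative only, not Borg code; it merely reproduces the stated relationships between mask bits, expected chunk size, and relative index size.

```python
# Illustrative arithmetic only, not Borg code. It reproduces the
# relationships described above: the chunk mask bits set the statistically
# expected chunk size, and the repo index size scales with the chunk count.
def expected_chunk_size(mask_bits: int) -> int:
    """Statistically expected chunk size for a given chunk mask."""
    return 2 ** mask_bits

sm = expected_chunk_size(16)       # ~64 KiB target chunks ("sm" repo)
lg = expected_chunk_size(20)       # ~1 MiB target chunks ("lg" repo)
default = expected_chunk_size(21)  # ~2 MiB, the documented default

# For the same data size, the chunk count (and thus the index size) scales
# inversely with the target chunk size: repo-sm needs ~16x the index of repo-lg.
index_ratio = lg // sm
print(default, index_ratio)  # 2097152 16
```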
@@ -1,4 +1,4 @@
-BorgBackup from 10.000m
+BorgBackup from 10,000 m
 =======================

 +--------+ +--------+ +--------+
@@ -1,9 +1,9 @@
 borg prune visualized
 =====================

-Assume it is 2016-01-01, today's backup has not yet been made, you have
+Assume it is 2016-01-01. Today's backup has not yet been made. You have
 created at least one backup on each day in 2015 except on 2015-12-19 (no
-backup made on that day), and you started backing up with borg on
+backup was made on that day), and you started backing up with Borg on
 2015-01-01.

 This is what borg prune --keep-daily 14 --keep-monthly 6 --keep-yearly 1
@@ -91,17 +91,17 @@ The --keep-monthly 6 rule keeps Nov, Oct, Sep, Aug, Jul and Jun. December is
 not considered for this rule, because that backup was already kept because of
 the daily rule.

-2015-12-17 is kept to satisfy the --keep-daily 14 rule - because no backup was
+2015-12-17 is kept to satisfy the --keep-daily 14 rule, because no backup was
 made on 2015-12-19. If a backup had been made on that day, it would not keep
 the one from 2015-12-17.

-We did not include weekly, hourly, minutely or secondly rules to keep this
+We did not include weekly, hourly, minutely, or secondly rules to keep this
 example simple. They all work in basically the same way.

 The weekly rule is easy to understand roughly, but hard to understand in all
-details. If interested, read "ISO 8601:2000 standard week-based year".
+details. If you are interested, read "ISO 8601:2000 standard week-based year".

-The 13weekly and 3monthly rules are two different strategies for keeping one
+The 13weekly and 3monthly rules are two different strategies for keeping one backup
 every quarter of a year. There are `multiple ways` to define a quarter-year;
 borg prune recognizes two:

@@ -113,12 +113,12 @@ borg prune recognizes two:
 March 31, April 1st to June 30th, July 1st to September 30th, and October 1st
 to December 31st form the quarters.

-If the subtleties of the definition of a quarter year don't matter to you, a
+If the subtleties of the definition of a quarter-year don't matter to you, a
 short summary of behavior is:

-* --keep-13weekly favors keeping backups at the beginning of Jan, Apr, July,
+* --keep-13weekly favors keeping backups at the beginning of Jan, Apr, Jul,
   and Oct.
-* --keep-3monthly favors keeping backups at the end of Dec, Mar, Jun, and Sept.
+* --keep-3monthly favors keeping backups at the end of Dec, Mar, Jun, and Sep.
 * Both strategies will have some overlap in which backups are kept.
 * The differences are negligible unless backups considered for deletion were
   created weekly or more frequently.
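As an aside, the ``--keep-daily`` behavior described in the prune docs above (a day without a backup pulls an older backup into the kept set, as with 2015-12-17) can be sketched in a few lines of Python. This is a simplified illustration, not Borg's actual pruning implementation; real pruning also considers time of day and the interaction between rules.

```python
# A simplified sketch of the --keep-daily idea, NOT Borg's actual pruning
# code. It keeps one backup for each of the last N distinct days that have
# backups, mirroring how a missing day (like 2015-12-19) pushes the daily
# rule further into the past.
from datetime import date

def keep_daily(backup_dates, n):
    """Return the backup dates a --keep-daily n style rule would keep."""
    kept = []
    for d in sorted(set(backup_dates), reverse=True):
        if len(kept) >= n:
            break
        kept.append(d)
    return kept

# Every day of December 2015 except the 19th, pruned "on" 2016-01-01:
dates = [date(2015, 12, day) for day in range(1, 32) if day != 19]
kept = keep_daily(dates, 14)
print(min(kept))  # 2015-12-17: the gap on the 19th pulls in the 17th
```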
@@ -7,7 +7,7 @@ Quick Start

 This chapter will get you started with Borg and covers various use cases.

-A step by step example
+A step-by-step example
 ----------------------

 .. include:: quickstart_example.rst.inc
@@ -20,10 +20,10 @@ stores a snapshot of the data of the files "inside" it. One can later extract or
 mount an archive to restore from a backup.

 *Repositories* are filesystem directories acting as self-contained stores of archives.
-Repositories can be accessed locally via path or remotely via ssh. Under the hood,
+Repositories can be accessed locally via path or remotely via SSH. Under the hood,
 repositories contain data blocks and a manifest that tracks which blocks are in each
 archive. If some data hasn't changed between backups, Borg simply
-references an already uploaded data chunk (deduplication).
+references an already-uploaded data chunk (deduplication).

 .. _about_free_space:

@@ -43,7 +43,7 @@ in your backup log files (you check them regularly anyway, right?).

 Also helpful:

-- use `borg repo-space` to reserve some disk space that can be freed when the fs
+- use `borg repo-space` to reserve some disk space that can be freed when the filesystem
   does not have free space any more.
 - if you use LVM: use a LV + a filesystem that you can resize later and have
   some unallocated PEs you can add to the LV.
@@ -64,8 +64,8 @@ If you only back up your own files, run it as your normal user (i.e. not root).

 For a local repository always use the same user to invoke borg.

-For a remote repository: always use e.g. ssh://borg@remote_host. You can use this
-from different local users, the remote user running borg and accessing the
+For a remote repository: always use e.g., ssh://borg@remote_host. You can use this
+from different local users; the remote user running borg and accessing the
 repo will always be `borg`.

 If you need to access a local repository from different users, you can use the
@@ -7,12 +7,12 @@

     $ borg -r /path/to/repo create docs ~/src ~/Documents

-3. The next day create a new archive using the same archive name::
+3. The next day, create a new archive using the same archive name::

     $ borg -r /path/to/repo create --stats docs ~/src ~/Documents

-   This backup will be a lot quicker and a lot smaller since only new, never
-   before seen data is stored. The ``--stats`` option causes Borg to
+   This backup will be much quicker and much smaller, since only new,
+   never-before-seen data is stored. The ``--stats`` option causes Borg to
    output statistics about the newly created archive such as the deduplicated
    size (the amount of unique data not shared with other archives)::

@@ -22,7 +22,7 @@
     Time (start): Sat, 2022-06-25 20:21:43
     Time (end): Sat, 2022-06-25 20:21:43
     Duration: 0.07 seconds
-    Utilization of max. archive size: 0%
+    Utilization of maximum archive size: 0%
     Number of files: 699
     Original size: 31.14 MB
     Deduplicated size: 502 B
@@ -44,14 +44,14 @@

     $ borg -r /path/to/repo extract aid:b80e24d2

-7. Delete the first archive (please note that this does **not** free repo disk space)::
+7. Delete the first archive (please note that this does **not** immediately free repository disk space)::

     $ borg -r /path/to/repo delete aid:b80e24d2

-   Be careful if you use an archive NAME (and not an archive ID), that might match multiple archives!
-   Always first use with ``--dry-run`` and ``--list``!
+   Be careful if you use an archive NAME (and not an archive ID), as it might match multiple archives.
+   Always use ``--dry-run`` and ``--list`` first!

-8. Recover disk space by compacting the segment files in the repo::
+8. Recover disk space by compacting the segment files in the repository::

     $ borg -r /path/to/repo compact -v

@@ -9,11 +9,11 @@ Usage
 Redirecting...

 <script type="text/javascript">
-    // Fixes old links which were just anchors
+    // Fixes old links that were just anchor fragments
     var hash = window.location.hash.substring(1);

-    // usage.html is empty, no content. Purely serves to implement a "correct" toctree
-    // due to rST/Sphinx limitations. Refer to https://github.com/sphinx-doc/sphinx/pull/3622
+    // usage.html is empty; it contains no content. It purely serves to implement a "correct" toctree
+    // due to reST/Sphinx limitations. Refer to https://github.com/sphinx-doc/sphinx/pull/3622

     // Redirect to general docs
     if(hash == "") {
@@ -13,9 +13,9 @@
 --show-rc show/log the return code (rc)
 --umask M set umask to M (local only, default: 0077)
 --remote-path PATH use PATH as borg executable on the remote (default: "borg")
---upload-ratelimit RATE set network upload rate limit in kiByte/s (default: 0=unlimited)
+--upload-ratelimit RATE set network upload rate limit in KiB/s (default: 0=unlimited)
 --upload-buffer UPLOAD_BUFFER set network upload buffer size in MiB. (default: 0=no buffer)
 --debug-profile FILE Write execution profile in Borg format into FILE. For local use a Python-compatible file can be generated by suffixing FILE with ".pyprof".
 --rsh RSH Use this command to connect to the 'borg serve' process (default: 'ssh')
---socket PATH Use UNIX DOMAIN (IPC) socket at PATH for client/server communication with socket: protocol.
+--socket PATH Use UNIX domain (IPC) socket at PATH for client/server communication with the socket: protocol.
 -r REPO, --repo REPO repository to use
@@ -4,6 +4,6 @@ Examples
 ~~~~~~~~
 ::

-    # compact segments and free repo disk space
+    # Compact segments and free repository disk space
     $ borg compact

@@ -23,14 +23,14 @@ Examples
     # /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
     $ borg create my-files /home --exclude 'sh:home/*/.thumbnails'

-    # Backup the root filesystem into an archive named "root-archive"
-    # use zlib compression (good, but slow) - default is lz4 (fast, low compression ratio)
+    # Back up the root filesystem into an archive named "root-archive"
+    # Use zlib compression (good, but slow) — default is LZ4 (fast, low compression ratio)
     $ borg create -C zlib,6 --one-file-system root-archive /

     # Backup into an archive name like FQDN-root
     $ borg create '{fqdn}-root' /

-    # Backup a remote host locally ("pull" style) using sshfs
+    # Back up a remote host locally ("pull" style) using SSHFS
     $ mkdir sshfs-mount
     $ sshfs root@example.com:/ sshfs-mount
     $ cd sshfs-mount
@@ -38,8 +38,8 @@ Examples
     $ cd ..
     $ fusermount -u sshfs-mount

-    # Make a big effort in fine granular deduplication (big chunk management
-    # overhead, needs a lot of RAM and disk space, see formula in internals docs):
+    # Make a big effort in fine-grained deduplication (big chunk management
+    # overhead, needs a lot of RAM and disk space; see the formula in the internals docs):
    $ borg create --chunker-params buzhash,10,23,16,4095 small /smallstuff

     # Backup a raw device (must not be active/in use/mounted at that time)
@@ -63,16 +63,16 @@ Examples
     # Only compress compressible data with lzma,N (N = 0..9)
     $ borg create --compression auto,lzma,N arch ~

-    # Use short hostname and user name as archive name
+    # Use the short hostname and username as the archive name
     $ borg create '{hostname}-{user}' ~

-    # Backing up relative paths by moving into the correct directory first
+    # Back up relative paths by moving into the correct directory first
     $ cd /home/user/Documents
     # The root directory of the archive will be "projectA"
     $ borg create 'daily-projectA' projectA

     # Use external command to determine files to archive
-    # Use --paths-from-stdin with find to back up only files less than 1MB in size
+    # Use --paths-from-stdin with find to back up only files less than 1 MB in size
     $ find ~ -size -1000k | borg create --paths-from-stdin small-files-only
     # Use --paths-from-command with find to back up files from only a given user
     $ borg create --paths-from-command joes-files -- find /srv/samba/shared -user joe
@@ -5,7 +5,7 @@ There is a ``borg debug`` command that has some subcommands which are all
 **not intended for normal use** and **potentially very dangerous** if used incorrectly.

 For example, ``borg debug put-obj`` and ``borg debug delete-obj`` will only do
-what their name suggests: put objects into repo / delete objects from repo.
+what their name suggests: put objects into the repository / delete objects from the repository.

 Please note:

@@ -30,5 +30,5 @@ or unauthenticated source.").
 The ``borg debug profile-convert`` command can be used to take a Borg profile and convert
 it to a profile file that is compatible with the Python tools.

-Additionally, if the filename specified for ``--debug-profile`` ends with ".pyprof" a
-Python compatible profile is generated. This is only intended for local use by developers.
+Additionally, if the filename specified for ``--debug-profile`` ends with ".pyprof", a
+Python-compatible profile is generated. This is only intended for local use by developers.
@@ -4,20 +4,20 @@ Examples
 ~~~~~~~~
 ::

-    # delete all backup archives named "kenny-files":
+    # Delete all backup archives named "kenny-files":
     $ borg delete -a kenny-files
-    # actually free disk space:
+    # Actually free disk space:
     $ borg compact

-    # delete a specific backup archive using its unique archive ID prefix
+    # Delete a specific backup archive using its unique archive ID prefix
     $ borg delete aid:d34db33f

-    # delete all archives whose names begin with the machine's hostname followed by "-"
+    # Delete all archives whose names begin with the machine's hostname followed by "-"
     $ borg delete -a 'sh:{hostname}-*'

-    # delete all archives whose names contain "-2012-"
+    # Delete all archives whose names contain "-2012-"
     $ borg delete -a 'sh:*-2012-*'

-    # see what would be deleted if delete was run without --dry-run
+    # See what would be deleted if delete was run without --dry-run
     $ borg delete --list --dry-run -a 'sh:*-May-*'

@@ -5,9 +5,9 @@ Borg consists of a number of commands. Each command accepts
 a number of arguments and options and interprets various environment variables.
 The following sections will describe each command in detail.

-Commands, options, parameters, paths and such are ``set in fixed-width``.
-Option values are `underlined`. Borg has few options accepting a fixed set
-of values (e.g. ``--encryption`` of :ref:`borg_repo-create`).
+Commands, options, parameters, paths, and similar elements are shown in fixed-width.
+Option values are underlined. Borg has a few options that accept a fixed set
+of values (e.g., ``--encryption`` of :ref:`borg_repo-create`).

 .. container:: experimental

@@ -18,7 +18,7 @@ of values (e.g. ``--encryption`` of :ref:`borg_repo-create`).

 .. include:: usage_general.rst.inc

-In case you are interested in more details (like formulas), please see
+If you are interested in more details (such as formulas), see
 :ref:`internals`. For details on the available JSON output, refer to
 :ref:`json_output`.

@@ -32,8 +32,8 @@ All Borg commands share these options:
 .. include:: common-options.rst.inc

 Option ``--help`` when used as a command works as expected on subcommands (e.g., ``borg help compact``).
-But it does not work when the help command is being used on sub-sub-commands (e.g., ``borg help key export``).
-The workaround for this to use the help command as a flag (e.g., ``borg key export --help``).
+But it does not work when the help command is used on sub-sub-commands (e.g., ``borg help key export``).
+The workaround for this is to use the help command as a flag (e.g., ``borg key export --help``).

 Examples
 ~~~~~~~~
@@ -1,10 +1,10 @@
 Date and Time
 ~~~~~~~~~~~~~

-We format date and time conforming to ISO-8601, that is: YYYY-MM-DD and
-HH:MM:SS (24h clock).
+We format date and time in accordance with ISO 8601, that is: YYYY-MM-DD and
+HH:MM:SS (24-hour clock).

-For more information about that, see: https://xkcd.com/1179/
+For more information, see: https://xkcd.com/1179/

 Unless otherwise noted, we display local date and time.
 Internally, we store and process date and time as UTC.
@@ -14,5 +14,5 @@ Internally, we store and process date and time as UTC.

 Some options accept a TIMESPAN parameter, which can be given as a number of
 years (e.g. ``2y``), months (e.g. ``12m``), weeks (e.g. ``2w``),
-days (e.g. ``7d``), hours (e.g. ``8H``), minutes (e.g. ``30M``),
-or seconds (e.g. ``150S``).
+days (e.g. ``7d``), hours (e.g. ``8h``), minutes (e.g. ``30m``),
+or seconds (e.g. ``150s``).
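The TIMESPAN suffixes described in the hunk above are easy to sketch as a tiny parser. This is a hypothetical illustration, not Borg's actual parser; it uses the pre-change suffix set with uppercase ``H``/``M``/``S`` for hours/minutes/seconds, which keeps minutes distinct from the months suffix ``m``, and it approximates calendar units with fixed lengths.

```python
# Hypothetical TIMESPAN parser sketch, not Borg's real implementation.
# Uppercase H/M/S (as in the old docs) keep minutes distinct from months "m";
# year and month lengths are rough fixed approximations.
from datetime import timedelta

UNITS = {
    "y": timedelta(days=365),   # approximation for a year
    "m": timedelta(days=31),    # approximation for a month
    "w": timedelta(weeks=1),
    "d": timedelta(days=1),
    "H": timedelta(hours=1),
    "M": timedelta(minutes=1),
    "S": timedelta(seconds=1),
}

def parse_timespan(text: str) -> timedelta:
    """Parse e.g. '2w' or '8H' into a timedelta."""
    number, unit = int(text[:-1]), text[-1]
    return number * UNITS[unit]

print(parse_timespan("2w"))   # 14 days, 0:00:00
print(parse_timespan("8H"))   # 8:00:00
```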
@@ -33,7 +33,7 @@ General:
     When set, use the value to answer the passphrase question when a **new** passphrase is asked for.
     This variable is checked first. If it is not set, BORG_PASSPHRASE and BORG_PASSCOMMAND will also
    be checked.
-    Main usecase for this is to automate fully ``borg change-passphrase``.
+    Main use case for this is to fully automate ``borg change-passphrase``.
 BORG_DISPLAY_PASSPHRASE
     When set, use the value to answer the "display the passphrase for verification" question when defining a new passphrase for encrypted repositories.
 BORG_DEBUG_PASSPHRASE
@@ -46,22 +46,22 @@ General:
     Borg usually computes a host id from the FQDN plus the results of ``uuid.getnode()`` (which usually returns
     a unique id based on the MAC address of the network interface. Except if that MAC happens to be all-zero - in
     that case it returns a random value, which is not what we want (because it kills automatic stale lock removal).
-    So, if you have a all-zero MAC address or other reasons to control better externally the host id, just set this
+    So, if you have an all-zero MAC address or other reasons to better control the host id externally, just set this
     environment variable to a unique value. If all your FQDNs are unique, you can just use the FQDN. If not,
-    use fqdn@uniqueid.
+    use FQDN@uniqueid.
 BORG_LOCK_WAIT
     You can set the default value for the ``--lock-wait`` option with this, so
-    you do not need to give it as a commandline option.
+    you do not need to give it as a command line option.
 BORG_LOGGING_CONF
     When set, use the given filename as INI_-style logging configuration.
     A basic example conf can be found at ``docs/misc/logging.conf``.
 BORG_RSH
     When set, use this command instead of ``ssh``. This can be used to specify ssh options, such as
     a custom identity file ``ssh -i /path/to/private/key``. See ``man ssh`` for other options. Using
-    the ``--rsh CMD`` commandline option overrides the environment variable.
+    the ``--rsh CMD`` command line option overrides the environment variable.
 BORG_REMOTE_PATH
     When set, use the given path as borg executable on the remote (defaults to "borg" if unset).
-    Using ``--remote-path PATH`` commandline option overrides the environment variable.
+    Using the ``--remote-path PATH`` command line option overrides the environment variable.
 BORG_REPO_PERMISSIONS
     Set repository permissions, see also: :ref:`borg_serve`
 BORG_FILES_CACHE_SUFFIX
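For context, the environment variables in the hunk above are typically set once in a wrapper script before invoking borg. A minimal sketch (the key path and host are hypothetical placeholders, taken from the example in the docs themselves; adjust for your setup):

```shell
# Hypothetical paths/host, adjust for your setup. These variables configure
# how borg reaches the remote; the --rsh and --remote-path command line
# options would override them, as described above.
export BORG_RSH='ssh -i /path/to/private/key'
export BORG_REMOTE_PATH='borg'
export BORG_LOCK_WAIT=30

# Subsequent invocations pick these up, e.g.:
#   borg -r ssh://borg@remote_host/./repo repo-list
echo "$BORG_RSH"
```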
@@ -79,7 +79,7 @@ General:
     exceptions is not shown.
     Please only use for good reasons as it makes issues harder to analyze.
 BORG_FUSE_IMPL
-    Choose the lowlevel FUSE implementation borg shall use for ``borg mount``.
+    Choose the low-level FUSE implementation borg shall use for ``borg mount``.
     This is a comma-separated list of implementation names, they are tried in the
     given order, e.g.:

@@ -89,15 +89,15 @@ General:
     - ``llfuse``: only try to load llfuse
     - ``none``: do not try to load an implementation
 BORG_SELFTEST
-    This can be used to influence borg's builtin self-tests. The default is to execute the tests
+    This can be used to influence borg's built-in self-tests. The default is to execute the tests
     at the beginning of each borg command invocation.

     BORG_SELFTEST=disabled can be used to switch off the tests and rather save some time.
     Disabling is not recommended for normal borg users, but large scale borg storage providers can
     use this to optimize production servers after at least doing a one-time test borg (with
-    selftests not disabled) when installing or upgrading machines / OS / borg.
+    self-tests not disabled) when installing or upgrading machines/OS/Borg.
 BORG_WORKAROUNDS
-    A list of comma separated strings that trigger workarounds in borg,
+    A list of comma-separated strings that trigger workarounds in borg,
     e.g. to work around bugs in other software.

     Currently known strings are:
@@ -107,7 +107,7 @@ General:
     You might need this to run borg on WSL (Windows Subsystem for Linux) or
     in systemd.nspawn containers on some architectures (e.g. ARM).
     Using this does not affect data safety, but might result in a more bursty
-    write to disk behaviour (not continuously streaming to disk).
+    write-to-disk behavior (not continuously streaming to disk).

     retry_erofs
         Retry opening a file without O_NOATIME if opening a file with O_NOATIME
@@ -3,15 +3,15 @@ Support for file metadata

 Besides regular file and directory structures, Borg can preserve

-* symlinks (stored as symlink, the symlink is not followed)
+* symlinks (stored as a symlink; the symlink is not followed)
 * special files:

-  * character and block device files (restored via mknod)
+  * character and block device files (restored via mknod(2))
   * FIFOs ("named pipes")
   * special file *contents* can be backed up in ``--read-special`` mode.
-    By default the metadata to create them with mknod(2), mkfifo(2) etc. is stored.
-* hardlinked regular files, devices, symlinks, FIFOs (considering all items in the same archive)
-* timestamps in nanosecond precision: mtime, atime, ctime
+    By default, the metadata to create them with mknod(2), mkfifo(2), etc. is stored.
+* hard-linked regular files, devices, symlinks, FIFOs (considering all items in the same archive)
+* timestamps with nanosecond precision: mtime, atime, ctime
 * other timestamps: birthtime (on platforms supporting it)
 * permissions:

@@ -42,10 +42,10 @@ On some platforms additional features are supported:
 | Windows (cygwin)        | No [4]_  | No        | No         |
 +-------------------------+----------+-----------+------------+

-Other Unix-like operating systems may work as well, but have not been tested at all.
+Other Unix-like operating systems may work as well, but have not been tested yet.

-Note that most of the platform-dependent features also depend on the file system.
-For example, ntfs-3g on Linux isn't able to convey NTFS ACLs.
+Note that most platform-dependent features also depend on the filesystem.
+For example, ntfs-3g on Linux is not able to convey NTFS ACLs.

 .. [1] Only "nodump", "immutable", "compressed" and "append" are supported.
     Feature request :issue:`618` for more flags.
@@ -54,8 +54,8 @@ For example, ntfs-3g on Linux isn't able to convey NTFS ACLs.
 .. [4] Cygwin tries to map NTFS ACLs to permissions with varying degrees of success.

 .. [#acls] The native access control list mechanism of the OS. This normally limits access to
-    non-native ACLs. For example, NTFS ACLs aren't completely accessible on Linux with ntfs-3g.
-.. [#xattr] extended attributes; key-value pairs attached to a file, mainly used by the OS.
-    This includes resource forks on Mac OS X.
-.. [#flags] aka *BSD flags*. The Linux set of flags [1]_ is portable across platforms.
+    non-native ACLs. For example, NTFS ACLs are not completely accessible on Linux with ntfs-3g.
+.. [#xattr] Extended attributes; key-value pairs attached to a file, mainly used by the OS.
+    This includes resource forks on macOS.
+.. [#flags] Also known as *BSD flags*. The Linux set of flags [1]_ is portable across platforms.
     The BSDs define additional flags.
@@ -2,36 +2,36 @@ File systems
 ~~~~~~~~~~~~

 We recommend using a reliable, scalable journaling filesystem for the
-repository, e.g. zfs, btrfs, ext4, apfs.
+repository, e.g., zfs, btrfs, ext4, apfs.

 Borg now uses the ``borgstore`` package to implement the key/value store it
 uses for the repository.

-It currently uses the ``file:`` Store (posixfs backend) either with a local
-directory or via ssh and a remote ``borg serve`` agent using borgstore on the
+It currently uses the ``file:`` store (posixfs backend) either with a local
+directory or via SSH and a remote ``borg serve`` agent using borgstore on the
 remote side.

 This means that it will store each chunk into a separate filesystem file
 (for more details, see the ``borgstore`` project).

-This has some pros and cons (compared to legacy borg 1.x's segment files):
+This has some pros and cons (compared to legacy Borg 1.x segment files):

 Pros:

-- Simplicity and better maintainability of the borg code.
-- Sometimes faster, less I/O, better scalability: e.g. borg compact can just
+- Simplicity and better maintainability of the Borg code.
+- Sometimes faster, less I/O, better scalability: e.g., borg compact can just
   remove unused chunks by deleting a single file and does not need to read
-  and re-write segment files to free space.
-- In future, easier to adapt to other kinds of storage:
+  and rewrite segment files to free space.
+- In the future, easier to adapt to other kinds of storage:
   borgstore's backends are quite simple to implement.
   ``sftp:`` and ``rclone:`` backends already exist, others might be easy to add.
 - Parallel repository access with less locking is easier to implement.

 Cons:

-- The repository filesystem will have to deal with a big amount of files (there
+- The repository filesystem will have to deal with a large number of files (there
   are provisions in borgstore against having too many files in a single directory
   by using a nested directory structure).
-- Bigger fs space usage overhead (will depend on allocation block size - modern
-  filesystems like zfs are rather clever here using a variable block size).
-- Sometimes slower, due to less sequential / more random access operations.
+- Greater filesystem space overhead (depends on the allocation block size — modern
+  filesystems like zfs are rather clever here, using a variable block size).
+- Sometimes slower, due to less sequential and more random access operations.
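The nested directory structure mentioned in the "Cons" above can be sketched in a few lines. This is illustrative only; borgstore's real directory layout, nesting depth, and naming may differ. The idea is simply to derive subdirectories from a prefix of the chunk id so no single directory holds too many files.

```python
# Illustrative sketch only, borgstore's actual layout may differ. Avoid huge
# flat directories by nesting chunk files under subdirectories derived from
# a prefix of the (hex) chunk id.
def chunk_path(chunk_id_hex: str, levels: int = 2, width: int = 2) -> str:
    """Map a chunk id to a nested path like 'de/ad/deadbeef...'."""
    parts = [chunk_id_hex[i * width:(i + 1) * width] for i in range(levels)]
    return "/".join(parts + [chunk_id_hex])

print(chunk_path("deadbeef"))  # de/ad/deadbeef
```

With two levels of two hex characters each, chunks spread over up to 65536 subdirectories.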
@@ -1,10 +1,9 @@
 Logging
 ~~~~~~~

-Borg writes all log output to stderr by default. But please note that something
-showing up on stderr does *not* indicate an error condition just because it is
-on stderr. Please check the log levels of the messages and the return code of
-borg for determining error, warning or success conditions.
+Borg writes all log output to stderr by default. However, output on stderr does
+not necessarily indicate an error. Check the log levels of the messages and the
+return code of borg to determine error, warning, or success conditions.

 If you want to capture the log output to a file, just redirect it:

@@ -15,30 +14,30 @@ If you want to capture the log output to a file, just redirect it:

 Custom logging configurations can be implemented via BORG_LOGGING_CONF.

-The log level of the builtin logging configuration defaults to WARNING.
+The log level of the built-in logging configuration defaults to WARNING.
 This is because we want Borg to be mostly silent and only output
-warnings, errors and critical messages, unless output has been requested
-by supplying an option that implies output (e.g. ``--list`` or ``--progress``).
+warnings, errors, and critical messages unless output has been requested
+by supplying an option that implies output (e.g., ``--list`` or ``--progress``).

 Log levels: DEBUG < INFO < WARNING < ERROR < CRITICAL

-Use ``--debug`` to set DEBUG log level -
-to get debug, info, warning, error and critical level output.
+Use ``--debug`` to set the DEBUG log level —
+this prints debug, info, warning, error, and critical messages.

-Use ``--info`` (or ``-v`` or ``--verbose``) to set INFO log level -
-to get info, warning, error and critical level output.
+Use ``--info`` (or ``-v`` or ``--verbose``) to set the INFO log level —
+this prints info, warning, error, and critical messages.

-Use ``--warning`` (default) to set WARNING log level -
-to get warning, error and critical level output.
+Use ``--warning`` (default) to set the WARNING log level —
+this prints warning, error, and critical messages.

-Use ``--error`` to set ERROR log level -
-to get error and critical level output.
+Use ``--error`` to set the ERROR log level —
+this prints error and critical messages.

-Use ``--critical`` to set CRITICAL log level -
-to get critical level output.
+Use ``--critical`` to set the CRITICAL log level —
+this prints only critical messages.

-While you can set misc. log levels, do not expect that every command will
-give different output on different log levels - it's just a possibility.
+While you can set miscellaneous log levels, do not expect every command to
+produce different output at different log levels — it's merely a possibility.

 .. warning:: Options ``--critical`` and ``--error`` are provided for completeness,
     their usage is not recommended as you might miss important information.
@@ -2,7 +2,7 @@ Positional Arguments and Options: Order matters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Borg only supports taking options (``-s`` and ``--progress`` in the example)
-to the left or right of all positional arguments (``repo::archive`` and ``path``
+either to the left or to the right of all positional arguments (``repo::archive`` and ``path``
 in the example), but not in between them:
 
 ::
@@ -1,14 +1,14 @@
-Repository Locations / Archive names
+Repository Locations / Archive Names
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Many commands need to know the repository location, give it via ``-r`` / ``--repo``
+Many commands need to know the repository location; specify it via ``-r``/``--repo``
 or use the ``BORG_REPO`` environment variable.
 
-Commands needing one or two archive names usually get them as positional argument.
+Commands that need one or two archive names usually take them as positional arguments.
 
-Commands working with an arbitrary amount of archives, usually take ``-a ARCH_GLOB``.
+Commands that work with an arbitrary number of archives usually accept ``-a ARCH_GLOB``.
 
 Archive names must not contain the ``/`` (slash) character. For simplicity,
-maybe also avoid blanks or other characters that have special meaning on the
-shell or in a filesystem (borg mount will use the archive name as directory
+also avoid spaces or other characters that have special meaning to the
+shell or in a filesystem (Borg mount uses the archive name as a directory
 name).
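The naming rule above can be checked mechanically; a small sketch (the helper name is invented — borg itself rejects invalid names for you):

```shell
#!/bin/sh
# Reject archive names containing '/', per the rule above.
valid_archive_name() {
  case "$1" in
    */*) return 1 ;;  # contains a slash -> invalid
    *)   return 0 ;;
  esac
}

valid_archive_name '{hostname}-2024-01-01' && echo ok   # -> ok
valid_archive_name 'bad/name' || echo rejected          # -> rejected
```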
@@ -3,48 +3,48 @@ Repository URLs
 
 **Local filesystem** (or locally mounted network filesystem):
 
-``/path/to/repo`` - filesystem path to repo directory, absolute path
+``/path/to/repo`` — filesystem path to the repository directory (absolute path)
 
-``path/to/repo`` - filesystem path to repo directory, relative path
+``path/to/repo`` — filesystem path to the repository directory (relative path)
 
-Also, stuff like ``~/path/to/repo`` or ``~other/path/to/repo`` works (this is
+Also, paths like ``~/path/to/repo`` or ``~other/path/to/repo`` work (this is
 expanded by your shell).
 
-Note: you may also prepend a ``file://`` to a filesystem path to get URL style.
+Note: You may also prepend ``file://`` to a filesystem path to use URL style.
 
-**Remote repositories** accessed via ssh user@host:
+**Remote repositories** accessed via SSH user@host:
 
-``ssh://user@host:port//abs/path/to/repo`` - absolute path
+``ssh://user@host:port//abs/path/to/repo`` — absolute path
 
-``ssh://user@host:port/rel/path/to/repo`` - path relative to current directory
+``ssh://user@host:port/rel/path/to/repo`` — path relative to the current directory
 
-**Remote repositories** accessed via sftp:
+**Remote repositories** accessed via SFTP:
 
-``sftp://user@host:port//abs/path/to/repo`` - absolute path
+``sftp://user@host:port//abs/path/to/repo`` — absolute path
 
-``sftp://user@host:port/rel/path/to/repo`` - path relative to current directory
+``sftp://user@host:port/rel/path/to/repo`` — path relative to the current directory
 
-For ssh and sftp URLs, the ``user@`` and ``:port`` parts are optional.
+For SSH and SFTP URLs, the ``user@`` and ``:port`` parts are optional.
 
 **Remote repositories** accessed via rclone:
 
-``rclone:remote:path`` - see the rclone docs for more details about remote:path.
+``rclone:remote:path`` — see the rclone docs for more details about ``remote:path``.
 
 **Remote repositories** accessed via S3:
 
-``(s3|b2):[profile|(access_key_id:access_key_secret)@][schema://hostname[:port]]/bucket/path`` - see the boto3 docs for more details about the credentials.
+``(s3|b2):[profile|(access_key_id:access_key_secret)@][schema://hostname[:port]]/bucket/path`` — see the boto3 docs for more details about credentials.
 
-If you're connecting to AWS S3, ``[schema://hostname[:port]]`` is optional, but ``bucket`` and ``path`` are always required.
+If you are connecting to AWS S3, ``[schema://hostname[:port]]`` is optional, but ``bucket`` and ``path`` are always required.
 
-Note: There is a known issue with some S3-compatible services, e.g., Backblaze B2. If you encounter problems, try using ``b2:`` instead of ``s3:`` in the url.
+Note: There is a known issue with some S3-compatible services, e.g., Backblaze B2. If you encounter problems, try using ``b2:`` instead of ``s3:`` in the URL.
 
 
-If you frequently need the same repo URL, it is a good idea to set the
-``BORG_REPO`` environment variable to set a default for the repo URL:
+If you frequently need the same repository URL, it is a good idea to set the
+``BORG_REPO`` environment variable to set a default repository URL:
 
 ::
 
     export BORG_REPO='ssh://user@host:port/rel/path/to/repo'
 
-Then just leave away the ``--repo`` option if you want
-to use the default - it will be read from BORG_REPO then.
+Then simply omit the ``--repo`` option when you want
+to use the default — it will be read from BORG_REPO.
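The precedence described above (an explicit ``--repo`` value beats the ``BORG_REPO`` default) can be sketched in plain shell; ``effective_repo`` is a hypothetical helper for illustration, not part of borg's real argument handling:

```shell
#!/bin/sh
# Hypothetical illustration of the precedence: an explicit --repo/-r value
# wins; otherwise the BORG_REPO environment variable supplies the default.
effective_repo() {
  explicit="$1"                      # value passed via -r/--repo, may be empty
  if [ -n "$explicit" ]; then
    echo "$explicit"
  else
    echo "${BORG_REPO:?no repository specified}"
  fi
}

export BORG_REPO='ssh://user@host:port/rel/path/to/repo'
effective_repo ""            # -> ssh://user@host:port/rel/path/to/repo
effective_repo "/other/repo" # -> /other/repo
```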
@@ -1,31 +1,31 @@
 Resource Usage
 ~~~~~~~~~~~~~~
 
-Borg might use a lot of resources depending on the size of the data set it is dealing with.
+Borg might use significant resources depending on the size of the data set it is dealing with.
 
-If one uses Borg in a client/server way (with a ssh: repository),
-the resource usage occurs in part on the client and in another part on the
+If you use Borg in a client/server way (with an SSH repository),
+the resource usage occurs partly on the client and partly on the
 server.
 
-If one uses Borg as a single process (with a filesystem repo),
-all the resource usage occurs in that one process, so just add up client +
+If you use Borg as a single process (with a filesystem repository),
+all resource usage occurs in that one process, so add up client and
 server to get the approximate resource usage.
 
 CPU client:
-    - **borg create:** does chunking, hashing, compression, crypto (high CPU usage)
-    - **chunks cache sync:** quite heavy on CPU, doing lots of hashtable operations.
-    - **borg extract:** crypto, decompression (medium to high CPU usage)
-    - **borg check:** similar to extract, but depends on options given.
-    - **borg prune / borg delete archive:** low to medium CPU usage
+    - **borg create:** chunking, hashing, compression, encryption (high CPU usage)
+    - **chunks cache sync:** quite heavy on CPU, doing lots of hash table operations
+    - **borg extract:** decryption, decompression (medium to high CPU usage)
+    - **borg check:** similar to extract, but depends on options given
+    - **borg prune/borg delete archive:** low to medium CPU usage
     - **borg delete repo:** done on the server
 
-It won't go beyond 100% of 1 core as the code is currently single-threaded.
+It will not use more than 100% of one CPU core as the code is currently single-threaded.
 Especially higher zlib and lzma compression levels use significant amounts
-of CPU cycles. Crypto might be cheap on the CPU (if hardware accelerated) or
+of CPU cycles. Crypto might be cheap on the CPU (if hardware-accelerated) or
 expensive (if not).
 
 CPU server:
-    It usually doesn't need much CPU, it just deals with the key/value store
+    It usually does not need much CPU; it just deals with the key/value store
     (repository) and uses the repository index for that.
 
     borg check: the repository check computes the checksums of all chunks
@@ -33,15 +33,15 @@ CPU server:
     borg delete repo: low CPU usage
 
 CPU (only for client/server operation):
-    When using borg in a client/server way with a ssh:-type repo, the ssh
+    When using Borg in a client/server way with an ssh-type repository, the SSH
     processes used for the transport layer will need some CPU on the client and
-    on the server due to the crypto they are doing - esp. if you are pumping
-    big amounts of data.
+    on the server due to the crypto they are doing—especially if you are pumping
+    large amounts of data.
 
 Memory (RAM) client:
     The chunks index and the files index are read into memory for performance
-    reasons. Might need big amounts of memory (see below).
-    Compression, esp. lzma compression with high levels might need substantial
+    reasons. Might need large amounts of memory (see below).
+    Compression, especially lzma compression with high levels, might need substantial
     amounts of memory.
 
 Memory (RAM) server:
@@ -49,47 +49,47 @@ Memory (RAM) server:
     considerable amounts of memory, but less than on the client (see below).
 
 Chunks index (client only):
-    Proportional to the amount of data chunks in your repo. Lots of chunks
+    Proportional to the number of data chunks in your repo. Lots of chunks
     in your repo imply a big chunks index.
-    It is possible to tweak the chunker params (see create options).
+    It is possible to tweak the chunker parameters (see create options).
 
 Files index (client only):
-    Proportional to the amount of files in your last backups. Can be switched
-    off (see create options), but next backup might be much slower if you do.
+    Proportional to the number of files in your last backups. Can be switched
+    off (see create options), but the next backup might be much slower if you do.
     The speed benefit of using the files cache is proportional to file size.
 
 Repository index (server only):
-    Proportional to the amount of data chunks in your repo. Lots of chunks
+    Proportional to the number of data chunks in your repo. Lots of chunks
     in your repo imply a big repository index.
-    It is possible to tweak the chunker params (see create options) to
-    influence the amount of chunks being created.
+    It is possible to tweak the chunker parameters (see create options) to
+    influence the number of chunks created.
 
 Temporary files (client):
-    Reading data and metadata from a FUSE mounted repository will consume up to
+    Reading data and metadata from a FUSE-mounted repository will consume up to
     the size of all deduplicated, small chunks in the repository. Big chunks
-    won't be locally cached.
+    will not be locally cached.
 
 Temporary files (server):
-    A non-trivial amount of data will be stored on the remote temp directory
+    A non-trivial amount of data will be stored in the remote temporary directory
     for each client that connects to it. For some remotes, this can fill the
-    default temporary directory at /tmp. This can be remediated by ensuring the
+    default temporary directory in /tmp. This can be mitigated by ensuring the
     $TMPDIR, $TEMP, or $TMP environment variable is properly set for the sshd
     process.
-    For some OSes, this can be done just by setting the correct value in the
-    .bashrc (or equivalent login config file for other shells), however in
+    For some OSes, this can be done by setting the correct value in the
+    .bashrc (or equivalent login config file for other shells); however, in
     other cases it may be necessary to first enable ``PermitUserEnvironment yes``
     in your ``sshd_config`` file, then add ``environment="TMPDIR=/my/big/tmpdir"``
-    at the start of the public key to be used in the ``authorized_hosts`` file.
+    at the start of the public key to be used in the ``authorized_keys`` file.
 
 Cache files (client only):
     Contains the chunks index and files index (plus a collection of single-
-    archive chunk indexes which might need huge amounts of disk space,
-    depending on archive count and size - see FAQ about how to reduce).
+    archive chunk indexes), which might need huge amounts of disk space
+    depending on archive count and size — see the FAQ for how to reduce this.
 
 Network (only for client/server operation):
     If your repository is remote, all deduplicated (and optionally compressed/
-    encrypted) data of course has to go over the connection (``ssh://`` repo url).
-    If you use a locally mounted network filesystem, additionally some copy
+    encrypted) data has to go over the connection (``ssh://`` repository URL).
+    If you use a locally mounted network filesystem, some additional copy
     operations used for transaction support also go over the connection. If
     you back up multiple sources to one target repository, additional traffic
     happens for cache resynchronization.
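Putting the server-side temporary-directory advice above together, the two configuration fragments might look like this (the paths and the key material are placeholders):

```
# /etc/ssh/sshd_config — allow keys in authorized_keys to set environment variables
PermitUserEnvironment yes

# ~/.ssh/authorized_keys — force borg serve's temporary files onto a big filesystem
environment="TMPDIR=/my/big/tmpdir" ssh-ed25519 AAAA... user@client
```

Remember to reload sshd after changing its configuration.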
@@ -7,10 +7,10 @@ Borg can exit with the following return codes (rc):
 Return code Meaning
 =========== =======
 0           success (logged as INFO)
-1           generic warning (operation reached its normal end, but there were warnings --
-            you should check the log, logged as WARNING)
-2           generic error (like a fatal error, a local or remote exception, the operation
-            did not reach its normal end, logged as ERROR)
+1           generic warning (operation reached its normal end, but there were warnings —
+            you should check the log; logged as WARNING)
+2           generic error (like a fatal error or a local/remote exception; the operation
+            did not reach its normal end; logged as ERROR)
 3..99       specific error (enabled by BORG_EXIT_CODES=modern)
 100..127    specific warning (enabled by BORG_EXIT_CODES=modern)
 128+N       killed by signal N (e.g. 137 == kill -9)
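A calling script can branch on the classes in this table; a sketch (``classify_rc`` is an invented helper, with the numeric ranges taken from the table above):

```shell
#!/bin/sh
# Classify a Borg return code according to the table above.
classify_rc() {
  rc="$1"
  if [ "$rc" -eq 0 ]; then echo success
  elif [ "$rc" -eq 1 ]; then echo warning
  elif [ "$rc" -eq 2 ]; then echo error
  elif [ "$rc" -ge 3 ] && [ "$rc" -le 99 ]; then echo "specific error"
  elif [ "$rc" -ge 100 ] && [ "$rc" -le 127 ]; then echo "specific warning"
  elif [ "$rc" -ge 128 ]; then echo "killed by signal $((rc - 128))"
  else echo unknown
  fi
}

classify_rc 137   # -> killed by signal 9
```

In a real backup script you would call this with ``$?`` right after the borg invocation.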
@@ -19,4 +19,4 @@ Return code Meaning
 If you use ``--show-rc``, the return code is also logged at the indicated
 level as the last log entry.
 
-The modern exit codes (return codes, "rc") are documented there: :ref:`msgid`
+The modern exit codes (return codes, "rc") are documented here: see :ref:`msgid`.
@@ -43,7 +43,7 @@ borgfs
     $ echo '/mnt/backup /tmp/myrepo fuse.borgfs defaults,noauto 0 0' >> /etc/fstab
     $ mount /tmp/myrepo
     $ ls /tmp/myrepo
-    root-2016-02-01 root-2016-02-2015
+    root-2016-02-01 root-2016-02-15
 
 .. Note::
@@ -1,41 +1,41 @@
 Additional Notes
 ----------------
 
-Here are misc. notes about topics that are maybe not covered in enough detail in the usage section.
+Here are miscellaneous notes about topics that may not be covered in enough detail in the usage section.
 
 .. _chunker-params:
 
 ``--chunker-params``
 ~~~~~~~~~~~~~~~~~~~~
 
-The chunker params influence how input files are cut into pieces (chunks)
+The chunker parameters influence how input files are cut into pieces (chunks)
 which are then considered for deduplication. They also have a big impact on
 resource usage (RAM and disk space) as the amount of resources needed is
-(also) determined by the total amount of chunks in the repository (see
+(also) determined by the total number of chunks in the repository (see
 :ref:`cache-memory-usage` for details).
 
 ``--chunker-params=buzhash,10,23,16,4095`` results in a fine-grained deduplication
-and creates a big amount of chunks and thus uses a lot of resources to manage
+and creates a large number of chunks and thus uses a lot of resources to manage
 them. This is good for relatively small data volumes and if the machine has a
 good amount of free RAM and disk space.
 
 ``--chunker-params=buzhash,19,23,21,4095`` (default) results in a coarse-grained
-deduplication and creates a much smaller amount of chunks and thus uses less
+deduplication and creates a much smaller number of chunks and thus uses less
 resources. This is good for relatively big data volumes and if the machine has
 a relatively low amount of free RAM and disk space.
 
-``--chunker-params=fixed,4194304`` results in fixed 4MiB sized block
-deduplication and is more efficient than the previous example when used for
+``--chunker-params=fixed,4194304`` results in fixed 4 MiB-sized block
+deduplication and is more efficient than the previous example when used
 for block devices (like disks, partitions, LVM LVs) or raw disk image files.
 
-``--chunker-params=fixed,4096,512`` results in fixed 4kiB sized blocks,
+``--chunker-params=fixed,4096,512`` results in fixed 4 KiB-sized blocks,
 but the first header block will only be 512B long. This might be useful to
 dedup files with 1 header + N fixed size data blocks. Be careful not to
-produce a too big amount of chunks (like using small block size for huge
+produce too many chunks (for example, using a small block size for huge
 files).
 
-If you already have made some archives in a repository and you then change
-chunker params, this of course impacts deduplication as the chunks will be
+If you have already created some archives in a repository and then change
+chunker parameters, this of course impacts deduplication as the chunks will be
 cut differently.
 
 In the worst case (all files are big and were touched in between backups), this
@@ -45,17 +45,17 @@ Usually, it is not that bad though:
 
 - usually most files are not touched, so it will just re-use the old chunks
   it already has in the repo
-- files smaller than the (both old and new) minimum chunksize result in only
-  one chunk anyway, so the resulting chunks are same and deduplication will apply
+- files smaller than the (both old and new) minimum chunk size result in only
+  one chunk anyway, so the resulting chunks are the same and deduplication will apply
 
-If you switch chunker params to save resources for an existing repo that
+If you switch chunker parameters to save resources for an existing repository that
 already has some backup archives, you will see an increasing effect over time,
 when more and more files have been touched and stored again using the bigger
-chunksize **and** all references to the smaller older chunks have been removed
+chunk size **and** all references to the smaller, older chunks have been removed
 (by deleting / pruning archives).
 
-If you want to see an immediate big effect on resource usage, you better start
-a new repository when changing chunker params.
+If you want to see an immediate, significant effect on resource usage, you should start
+a new repository when changing chunker parameters.
 
 For more details, see :ref:`chunker_details`.
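A rough way to see why the defaults are coarser: the third buzhash parameter sets the target chunk size to about 2^bits bytes, so the expected chunk count for a given data volume can be estimated. This is only an approximation (it ignores the min/max chunk size clamps), and ``estimate_chunks`` is an invented helper:

```shell
#!/bin/sh
# Rough estimate: expected number of chunks = data size / 2^HASH_MASK_BITS.
# This ignores the min/max chunk size clamps, so it is only an approximation.
estimate_chunks() {
  data_bytes="$1"; mask_bits="$2"
  echo $(( data_bytes / (1 << mask_bits) ))
}

TIB=$(( 1024 * 1024 * 1024 * 1024 ))
estimate_chunks "$TIB" 21   # default params: ~524288 chunks for 1 TiB
estimate_chunks "$TIB" 16   # fine-grained params: ~16777216 chunks for 1 TiB
```

Thirty-two times more chunks for the same data is why the fine-grained parameters need so much more index RAM and cache disk space.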
@@ -63,19 +63,19 @@ For more details, see :ref:`chunker_details`.
 ``--noatime / --noctime``
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-You can use these ``borg create`` options not to store the respective timestamp
+You can use these ``borg create`` options to not store the respective timestamp
 into the archive, in case you do not really need it.
 
-Besides saving a little space for the not archived timestamp, it might also
+Besides saving a little space by omitting the timestamp, it might also
 affect metadata stream deduplication: if only this timestamp changes between
 backups and is stored into the metadata stream, the metadata stream chunks
-won't deduplicate just because of that.
+will not deduplicate just because of that.
 
 ``--nobsdflags / --noflags``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-You can use this not to query and store (or not extract and set) flags - in case
-you don't need them or if they are broken somehow for your fs.
+You can use this to avoid querying and storing (or extracting and setting) flags — in case
+you don't need them or if they are broken for your filesystem.
 
 On Linux, dealing with the flags needs some additional syscalls. Especially when
 dealing with lots of small files, this causes a noticeable overhead, so you can

@@ -3,25 +3,25 @@
 Examples
 ~~~~~~~~
 
-Be careful, prune is a potentially dangerous command, it will remove backup
+Be careful: prune is a potentially dangerous command that removes backup
 archives.
 
-The default of prune is to apply to **all archives in the repository** unless
-you restrict its operation to a subset of the archives.
+By default, prune applies to **all archives in the repository** unless you
+restrict its operation to a subset of the archives.
 
 The recommended way to name archives (with ``borg create``) is to use the
 identical archive name within a series of archives. Then you can simply give
-that name to prune also, so it operates just on that series of archives.
+that name to prune as well, so it operates only on that series of archives.
 
-Alternatively, you can use ``-a`` / ``--match-archives`` to do a match on the
-archive names to select some of them.
-When using ``-a``, be careful to choose a good pattern - e.g. do not use a
+Alternatively, you can use ``-a``/``--match-archives`` to match archive names
+and select a subset of them.
+When using ``-a``, be careful to choose a good pattern — for example, do not use a
 prefix "foo" if you do not also want to match "foobar".
 
 It is strongly recommended to always run ``prune -v --list --dry-run ...``
-first so you will see what it would do without it actually doing anything.
+first, so you will see what it would do without it actually doing anything.
 
-Don't forget to run ``borg compact -v`` after prune to actually free disk space.
+Do not forget to run ``borg compact -v`` after prune to actually free disk space.
 
 ::
 

@@ -29,11 +29,11 @@ Don't forget to run ``borg compact -v`` after prune to actually free disk space.
     # Do a dry-run without actually deleting anything.
     $ borg prune -v --list --dry-run --keep-daily=7 --keep-weekly=4
 
-    # Similar as above but only apply to the archive series named '{hostname}':
+    # Similar to the above, but only apply to the archive series named '{hostname}':
     $ borg prune -v --list --keep-daily=7 --keep-weekly=4 '{hostname}'
 
-    # Similar as above but apply to archive names starting with the hostname
-    # of the machine followed by a "-" character:
+    # Similar to the above, but apply to archive names starting with the hostname
+    # of the machine followed by a '-' character:
     $ borg prune -v --list --keep-daily=7 --keep-weekly=4 -a 'sh:{hostname}-*'
 
     # Keep 7 end of day, 4 additional end of week archives,

@@ -4,17 +4,17 @@ Examples
 ~~~~~~~~
 ::
 
-    # Create a backup with little but fast compression
+    # Create a backup with fast, low compression
     $ borg create archive /some/files --compression lz4
-    # Then compress it - this might take longer, but the backup has already completed,
-    # so no inconsistencies from a long-running backup job.
+    # Then recompress it — this might take longer, but the backup has already completed,
+    # so there are no inconsistencies from a long-running backup job.
     $ borg recreate -a archive --recompress --compression zlib,9
 
     # Remove unwanted files from all archives in a repository.
-    # Note the relative path for the --exclude option - archives only contain relative paths.
+    # Note the relative path for the --exclude option — archives only contain relative paths.
     $ borg recreate --exclude home/icke/Pictures/drunk_photos
 
-    # Change archive comment
+    # Change the archive comment
     $ borg create --comment "This is a comment" archivename ~
     $ borg info -a archivename
     Name: archivename

@@ -5,8 +5,8 @@ Examples
 
 ::
 
-    # recompress repo contents
+    # Recompress repository contents
     $ borg repo-compress --progress --compression=zstd,3
 
-    # recompress and obfuscate repo contents
+    # Recompress and obfuscate repository contents
     $ borg repo-compress --progress --compression=obfuscate,1,zstd,3

@@ -8,20 +8,20 @@ Examples
 
     # Local repository
     $ export BORG_REPO=/path/to/repo
-    # recommended repokey AEAD crypto modes
+    # Recommended repokey AEAD cryptographic modes
     $ borg repo-create --encryption=repokey-aes-ocb
     $ borg repo-create --encryption=repokey-chacha20-poly1305
     $ borg repo-create --encryption=repokey-blake2-aes-ocb
     $ borg repo-create --encryption=repokey-blake2-chacha20-poly1305
-    # no encryption, not recommended
+    # No encryption (not recommended)
     $ borg repo-create --encryption=authenticated
     $ borg repo-create --encryption=authenticated-blake2
     $ borg repo-create --encryption=none
 
-    # Remote repository (accesses a remote borg via ssh)
+    # Remote repository (accesses a remote Borg via SSH)
     $ export BORG_REPO=ssh://user@hostname/~/backup
-    # repokey: stores the (encrypted) key into <REPO_DIR>/config
+    # repokey: stores the encrypted key in <REPO_DIR>/config
     $ borg repo-create --encryption=repokey-aes-ocb
-    # keyfile: stores the (encrypted) key into ~/.config/borg/keys/
+    # keyfile: stores the encrypted key in ~/.config/borg/keys/
     $ borg repo-create --encryption=keyfile-aes-ocb

@@ -38,12 +38,12 @@ locations like ``/etc/environment`` or in the forced command itself (example bel
 .. note::
    The examples above use the ``restrict`` directive and assume a POSIX
    compliant shell set as the user's login shell.
-   This does automatically block potential dangerous ssh features, even when
+   This automatically blocks potentially dangerous SSH features, even when
    they are added in a future update. Thus, this option should be preferred.
 
-   If you're using openssh-server < 7.2, however, you have to specify explicitly
-   the ssh features to restrict and cannot simply use the restrict option as it
-   has been introduced in v7.2. We recommend to use
+   If you are using OpenSSH server < 7.2, however, you must explicitly
+   specify the SSH features to restrict and cannot simply use the ``restrict`` option, as it
+   was introduced in v7.2. We recommend using
    ``no-port-forwarding,no-X11-forwarding,no-pty,no-agent-forwarding,no-user-rc``
    in this case.

@@ -56,9 +56,9 @@ SSH Configuration
 ``borg serve``'s pipes (``stdin``/``stdout``/``stderr``) are connected to the ``sshd`` process on the server side. In the event that the SSH connection between ``borg serve`` and the client is disconnected or stuck abnormally (for example, due to a network outage), it can take a long time for ``sshd`` to notice the client is disconnected. In the meantime, ``sshd`` continues running, and as a result so does the ``borg serve`` process holding the lock on the repository. This can cause subsequent ``borg`` operations on the remote repository to fail with the error: ``Failed to create/acquire the lock``.
 
-In order to avoid this, it is recommended to perform the following additional SSH configuration:
+To avoid this, it is recommended to perform the following additional SSH configuration:
 
-Either in the client side's ``~/.ssh/config`` file, or in the client's ``/etc/ssh/ssh_config`` file:
+Either in the client-side ``~/.ssh/config`` file or in the client's ``/etc/ssh/ssh_config`` file:
 ::
 
     Host backupserver

@@ -67,15 +67,15 @@ Either in the client side's ``~/.ssh/config`` file, or in the client's ``/etc/ss
 Replacing ``backupserver`` with the hostname, FQDN or IP address of the borg server.
 
-This will cause the client to send a keepalive to the server every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the ssh client process will be terminated, causing the borg process to terminate gracefully.
+This will cause the client to send a keepalive to the server every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the SSH client process will be terminated, causing the borg process to terminate gracefully.
 
-On the server side's ``sshd`` configuration file (typically ``/etc/ssh/sshd_config``):
+In the server-side ``sshd`` configuration file (typically ``/etc/ssh/sshd_config``):
 ::
 
     ClientAliveInterval 10
     ClientAliveCountMax 30
 
-This will cause the server to send a keep alive to the client every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the server's sshd process will be terminated, causing the ``borg serve`` process to terminate gracefully and release the lock on the repository.
+This will cause the server to send a keepalive to the client every 10 seconds. If 30 consecutive keepalives are sent without a response (a time of 300 seconds), the server's sshd process will be terminated, causing the ``borg serve`` process to terminate gracefully and release the lock on the repository.
 
 If you then run borg commands with ``--lock-wait 600``, this gives sufficient time for the borg serve processes to terminate after the SSH connection is torn down after the 300 second wait for the keepalives to fail.
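The client-side option block under ``Host backupserver`` is cut off in this hunk; with the 10-second interval and 30-count values discussed, it would presumably use the standard OpenSSH keepalive pair:

```
Host backupserver
    ServerAliveInterval 10
    ServerAliveCountMax 30
```

Both sides use the same arithmetic: an unanswered keepalive every 10 seconds, 30 times in a row, means the connection is declared dead after about 300 seconds.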
@@ -87,7 +87,7 @@ not be such a simple implementation)::
     chsh -s /bin/sh BORGUSER
 
-Because the configured shell is used by `openssh <https://www.openssh.com/>`_
+Because the configured shell is used by `OpenSSH <https://www.openssh.com/>`_
 to execute the command configured through the ``authorized_keys`` file
 using ``"$SHELL" -c "$COMMAND"``,
 setting a minimal shell implementation reduces the attack surface

@@ -6,25 +6,25 @@ Examples
 ~~~~~~~~
 ::
 
-    # export as uncompressed tar
+    # Export as an uncompressed tar archive
     $ borg export-tar Monday Monday.tar
 
-    # import an uncompressed tar
+    # Import an uncompressed tar archive
     $ borg import-tar Monday Monday.tar
 
-    # exclude some file types, compress using gzip
+    # Exclude some file types and compress using gzip
    $ borg export-tar Monday Monday.tar.gz --exclude '*.so'
 
-    # use higher compression level with gzip
+    # Use a higher compression level with gzip
     $ borg export-tar --tar-filter="gzip -9" Monday Monday.tar.gz
 
-    # copy an archive from repoA to repoB
+    # Copy an archive from repoA to repoB
     $ borg -r repoA export-tar --tar-format=BORG archive - | borg -r repoB import-tar archive -
 
-    # export a tar, but instead of storing it on disk, upload it to remote site using curl
+    # Export a tar, but instead of storing it on disk, upload it to a remote site using curl
     $ borg export-tar Monday - | curl --data-binary @- https://somewhere/to/POST
 
-    # remote extraction via "tarpipe"
+    # Remote extraction via 'tarpipe'
     $ borg export-tar Monday - | ssh somewhere "cd extracted; tar x"
 
 Archives transfer script
|
|
@ -46,7 +46,7 @@ Kept:
|
|||
|
||||
Lost:
|
||||
|
||||
- some archive metadata (like the original commandline, execution time, etc.)
|
||||
- some archive metadata (like the original command line, execution time, etc.)
|
||||
|
||||
Please note:
|
||||
|
||||
|
|
|
|||
|
|
@ -4,17 +4,17 @@ Examples
~~~~~~~~
::

# 0. Have borg 2.0 installed on client AND server, have a b12 repo copy for testing.
# 0. Have Borg 2.0 installed on the client AND server; have a b12 repository copy for testing.

# 1. Create a new "related" repository:
# here, the existing borg 1.2 repo used repokey-blake2 (and aes-ctr mode),
# thus we use repokey-blake2-aes-ocb for the new borg 2.0 repo.
# staying with the same chunk id algorithm (blake2) and with the same
# Here, the existing Borg 1.2 repository used repokey-blake2 (and AES-CTR mode),
# thus we use repokey-blake2-aes-ocb for the new Borg 2.0 repository.
# Staying with the same chunk ID algorithm (BLAKE2) and with the same
# key material (via --other-repo <oldrepo>) will make deduplication work
# between old archives (copied with borg transfer) and future ones.
# the AEAD cipher does not matter (everything must be re-encrypted and
# re-authenticated anyway), you could also choose repokey-blake2-chacha20-poly1305.
# in case your old borg repo did not use blake2, just remove the "-blake2".
# The AEAD cipher does not matter (everything must be re-encrypted and
# re-authenticated anyway); you could also choose repokey-blake2-chacha20-poly1305.
# In case your old Borg repository did not use BLAKE2, just remove the "-blake2".
$ borg --repo ssh://borg2@borgbackup/./tests/b20 repo-create \
--other-repo ssh://borg2@borgbackup/./tests/b12 -e repokey-blake2-aes-ocb

@ -22,11 +22,11 @@ Examples
$ borg --repo ssh://borg2@borgbackup/./tests/b20 transfer --upgrader=From12To20 \
--other-repo ssh://borg2@borgbackup/./tests/b12 --dry-run

# 3. Transfer (copy) archives from old repo into new repo (takes time and space!):
# 3. Transfer (copy) archives from the old repository into the new repository (takes time and space!):
$ borg --repo ssh://borg2@borgbackup/./tests/b20 transfer --upgrader=From12To20 \
--other-repo ssh://borg2@borgbackup/./tests/b12

# 4. Check if we have everything (same as 2.):
# 4. Check whether we have everything (same as step 2):
$ borg --repo ssh://borg2@borgbackup/./tests/b20 transfer --upgrader=From12To20 \
--other-repo ssh://borg2@borgbackup/./tests/b12 --dry-run
@ -5,7 +5,7 @@ authors = [{name="The Borg Collective (see AUTHORS file)"}]
maintainers = [
{name="Thomas Waldmann", email="tw@waldmann-edv.de"},
]
description = "Deduplicated, encrypted, authenticated and compressed backups"
description = "Deduplicated, encrypted, authenticated, and compressed backups"
requires-python = ">=3.10"
keywords = ["backup", "borgbackup"]
classifiers = [

@ -34,7 +34,7 @@ dependencies = [
"borgstore ~= 0.3.0",
"msgpack >=1.0.3, <=1.1.1",
"packaging",
"platformdirs >=3.0.0, <5.0.0; sys_platform == 'darwin'", # for macOS: breaking changes in 3.0.0,
"platformdirs >=3.0.0, <5.0.0; sys_platform == 'darwin'", # for macOS: breaking changes in 3.0.0.
"platformdirs >=2.6.0, <5.0.0; sys_platform != 'darwin'", # for others: 2.6+ works consistently.
"argon2-cffi",
]

@ -74,7 +74,7 @@ requires = ["setuptools>=77.0.0", "wheel", "pkgconfig", "Cython>=3.0.3", "setupt
build-backend = "setuptools.build_meta"

[tool.setuptools_scm]
# make sure we have the same versioning scheme with all setuptools_scm versions, to avoid different autogenerated files
# Make sure we have the same versioning scheme with all setuptools_scm versions, to avoid different autogenerated files.
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1015052
# https://github.com/borgbackup/borg/issues/6875
write_to = "src/borg/_version.py"

@ -99,10 +99,10 @@ select = ["E", "F"]
# F811 redef of unused var

# borg code style guidelines:
# Ignoring E203 due to https://github.com/PyCQA/pycodestyle/issues/373
# Ignoring E203 due to https://github.com/PyCQA/pycodestyle/issues/373.
ignore = ["E203", "F405", "E402"]

# Allow autofix for all enabled rules (when `--fix`) is provided.
# Allow autofix for all enabled rules (when `--fix` is provided).
fixable = ["A", "B", "C", "D", "E", "F", "G", "I", "N", "Q", "S", "T", "W", "ANN", "ARG", "BLE", "COM", "DJ", "DTZ", "EM", "ERA", "EXE", "FBT", "ICN", "INP", "ISC", "NPY", "PD", "PGH", "PIE", "PL", "PT", "PTH", "PYI", "RET", "RSE", "RUF", "SIM", "SLF", "TCH", "TID", "TRY", "UP", "YTT"]
unfixable = []
@ -1,5 +1,5 @@
# -*- mode: python -*-
# this pyinstaller spec file is used to build borg binaries on posix platforms and Windows
# This PyInstaller spec file is used to build Borg binaries on POSIX platforms and Windows.

import os, sys

@ -53,10 +53,10 @@ exe = EXE(pyz,
icon='NONE')

# Build a directory-based binary in addition to a packed
# single file. This allows one to look at all included
# files easily (e.g. without having to strace or halt the built binary
# and introspect /tmp). Also avoids unpacking all libs when
# running the app, which is better for app signing on various OS.
# single-file binary. This allows you to look at all included
# files easily (e.g., without having to strace or halt the built binary
# and introspect /tmp). It also avoids unpacking all libraries when
# running the app, which is better for application signing on various OSes.
slim_exe = EXE(pyz,
a.scripts,
exclude_binaries=True,
@ -1,5 +1,5 @@
#!/usr/bin/env python3
# this script automatically generates the error list for the docs by
# This script automatically generates the error list for the documentation by
# looking at the "Error" class and its subclasses.

from textwrap import indent
@ -1,10 +1,10 @@
#!/usr/bin/env python3
"""
Check if all given binaries work with the given glibc version.
Check whether all given binaries work with the specified glibc version.

glibc_check.py 2.11 bin [bin ...]
Usage: glibc_check.py 2.11 BIN [BIN ...]

rc = 0 means "yes", rc = 1 means "no".
Exit code 0 means "yes"; exit code 1 means "no".
"""

import re

@ -40,7 +40,7 @@ def main():
print(f"(unknown) {format_version(requires_glibc)}")
except subprocess.CalledProcessError:
if verbose:
print("%s errored." % filename)
print("%s failed." % filename)

wanted = max(overall_versions)
ok = given >= wanted

@ -51,7 +51,7 @@ def main():
else:
print(
"The binaries do not work with the given glibc %s. "
"Minimum is: %s" % (format_version(given), format_version(wanted))
"Minimum required is %s." % (format_version(given), format_version(wanted))
)
return ok
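The comparison glibc_check performs — the given glibc must be at least the highest version any scanned binary requires — can be sketched in shell with ``sort -V`` (GNU version sort). The version strings here are illustrative; the real script derives ``wanted`` from the binaries themselves:

```shell
# Illustrative values; glibc_check extracts the required versions from the binaries.
given="2.11"
wanted="2.9"   # highest glibc version required by any scanned binary
highest=$(printf '%s\n%s\n' "$given" "$wanted" | sort -V | tail -n1)
if [ "$highest" = "$given" ]; then ok=0; else ok=1; fi   # exit code 0 means "yes"
echo "$ok"   # prints 0: glibc 2.11 satisfies a 2.9 requirement
```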
@ -1,6 +1,6 @@
# this scripts uses borg 1.2 to generate test data for "borg transfer --upgrader=From12To20"
# This script uses Borg 1.2 to generate test data for "borg transfer --upgrader=From12To20".
BORG=./borg-1.2.2
# on macOS, gnu tar is available as gtar
# On macOS, GNU tar is available as gtar.
TAR=gtar
SRC=/tmp/borgtest
ARCHIVE=`pwd`/src/borg/testsuite/archiver/repo12.tar.gz

@ -55,7 +55,7 @@ pushd $SRC >/dev/null
sudo mkdir root_stuff
sudo mknod root_stuff/bdev_12_34 b 12 34
sudo mknod root_stuff/cdev_34_56 c 34 56
sudo touch root_stuff/strange_uid_gid # no user name, no group name for this uid/gid!
sudo touch root_stuff/strange_uid_gid # No user name or group name exists for this UID/GID!
sudo chown 54321:54321 root_stuff/strange_uid_gid

popd >/dev/null
@ -1,9 +1,9 @@
# Completions for borg
# Bash completions for Borg
# https://www.borgbackup.org/
# Note:
# Listing archives works on password protected repositories only if $BORG_PASSPHRASE is set.
# Listing archives works on password-protected repositories only if $BORG_PASSPHRASE is set.
# Install:
# Copy this file to /usr/share/bash-completion/completions/ or /etc/bash_completion.d/
# Copy this file to /usr/share/bash-completion/completions/ or /etc/bash_completion.d/.

_borg()
{

@ -1,7 +1,7 @@
# Completions for borg
# https://www.borgbackup.org/
# Note:
# Listing archives works on password protected repositories only if $BORG_PASSPHRASE is set.
# Listing archives works on password-protected repositories only if $BORG_PASSPHRASE is set.
# Install:
# Copy this file to /usr/share/fish/vendor_completions.d/

@ -19,7 +19,7 @@ complete -c borg -f -n __fish_is_first_token -a 'prune' -d 'Prune repository arc
complete -c borg -f -n __fish_is_first_token -a 'compact' -d 'Free repository space'
complete -c borg -f -n __fish_is_first_token -a 'info' -d 'Show archive details'
complete -c borg -f -n __fish_is_first_token -a 'mount' -d 'Mount archive or a repository'
complete -c borg -f -n __fish_is_first_token -a 'umount' -d 'Un-mount the mounted archive'
complete -c borg -f -n __fish_is_first_token -a 'umount' -d 'Unmount the mounted archive'
complete -c borg -f -n __fish_is_first_token -a 'repo-compress' -d 'Repository (re-)compression'
complete -c borg -f -n __fish_is_first_token -a 'repo-create' -d 'Create a new, empty repository'
complete -c borg -f -n __fish_is_first_token -a 'repo-delete' -d 'Delete a repository'

@ -29,7 +29,7 @@ complete -c borg -f -n __fish_is_first_token -a 'repo-space' -d 'Manage reserved
complete -c borg -f -n __fish_is_first_token -a 'tag' -d 'Tag archives'
complete -c borg -f -n __fish_is_first_token -a 'transfer' -d 'Transfer of archives from another repository'
complete -c borg -f -n __fish_is_first_token -a 'undelete' -d 'Undelete archives'
complete -c borg -f -n __fish_is_first_token -a 'version' -d 'Display borg client version / borg server version'
complete -c borg -f -n __fish_is_first_token -a 'version' -d 'Display Borg client version / Borg server version'

function __fish_borg_seen_key
if __fish_seen_subcommand_from key

@ -46,7 +46,7 @@ complete -c borg -f -n __fish_borg_seen_key -a 'change-passphrase' -d 'Change k
complete -c borg -f -n __fish_is_first_token -a 'serve' -d 'Start in server mode'
complete -c borg -f -n __fish_is_first_token -a 'recreate' -d 'Recreate contents of existing archives'
complete -c borg -f -n __fish_is_first_token -a 'export-tar' -d 'Create tarball from an archive'
complete -c borg -f -n __fish_is_first_token -a 'with-lock' -d 'Run a command while repository lock held'
complete -c borg -f -n __fish_is_first_token -a 'with-lock' -d 'Run a command while the repository lock is held'
complete -c borg -f -n __fish_is_first_token -a 'break-lock' -d 'Break the repository lock'

function __fish_borg_seen_benchmark

@ -89,11 +89,11 @@ complete -c borg -f -l 'show-version' -d 'Log version information'
complete -c borg -f -l 'show-rc' -d 'Log the return code'
complete -c borg -f -l 'umask' -d 'Set umask to M [0077]'
complete -c borg -l 'remote-path' -d 'Use PATH as remote borg executable'
complete -c borg -f -l 'upload-ratelimit' -d 'Set network upload rate limit in kiByte/s'
complete -c borg -f -l 'upload-ratelimit' -d 'Set network upload rate limit in KiB/s'
complete -c borg -f -l 'upload-buffer' -d 'Set network upload buffer size in MiB'
complete -c borg -l 'debug-profile' -d 'Write execution profile into FILE'
complete -c borg -l 'rsh' -d 'Use COMMAND instead of ssh'
complete -c borg -l 'socket' -d 'Use UNIX DOMAIN socket at PATH'
complete -c borg -l 'socket' -d 'Use UNIX domain socket at PATH'

# borg analyze options
complete -c borg -f -s a -l 'match-archives' -d 'Only archive names matching PATTERN' -n "__fish_seen_subcommand_from analyze"

@ -55,10 +55,10 @@ _borg_commands() {
'serve:start in server mode'
'tag:tag archives'
'transfer:transfer of archives from another repository'
'umount:un-mount the FUSE filesystem'
'umount:unmount the FUSE filesystem'
'undelete:undelete archive'
'version:display borg client version / borg server version'
'with-lock:run a user specified command with the repository lock held'
'with-lock:run a user-specified command with the repository lock held'
)
_describe -t commands 'borg commands' commands_
}
setup.py
@ -30,7 +30,7 @@ sys.path += [os.path.dirname(__file__)]
is_win32 = sys.platform.startswith("win32")
is_openbsd = sys.platform.startswith("openbsd")

# Number of threads to use for cythonize, not used on windows
# Number of threads to use for cythonize, not used on Windows
cpu_threads = multiprocessing.cpu_count() if multiprocessing and multiprocessing.get_start_method() != "spawn" else None

# How the build process finds the system libs:

@ -91,8 +91,8 @@ else:
cython_c_files = [fn.replace(".pyx", ".c") for fn in cython_sources]
if not on_rtd and not all(os.path.exists(path) for path in cython_c_files):
raise ImportError(
"The GIT version of Borg needs a working Cython. "
+ "Install or fix Cython or use a released borg version. "
"The Git version of Borg needs a working Cython. "
+ "Install or fix Cython or use a released Borg version. "
+ "Importing cythonize failed with: "
+ cythonize_import_error_msg
)

@ -115,7 +115,7 @@ if not on_rtd:
try:
import pkgconfig as pc
except ImportError:
print("Warning: can not import pkgconfig python package.")
print("Warning: cannot import pkgconfig Python package.")
pc = None

def lib_ext_kwargs(pc, prefix_env_var, lib_name, lib_pkg_name, pc_version, lib_subdir="lib"):

@ -139,7 +139,7 @@ if not on_rtd:
if is_win32:
crypto_ext_lib = lib_ext_kwargs(pc, "BORG_OPENSSL_PREFIX", "libcrypto", "libcrypto", ">=1.1.1", lib_subdir="")
elif is_openbsd:
# Use openssl (not libressl) because we need AES-OCB via EVP api. Link
# Use OpenSSL (not LibreSSL) because we need AES-OCB via the EVP API. Link
# it statically to avoid conflicting with shared libcrypto from the base
# OS pulled in via dependencies.
openssl_prefix = os.environ.get("BORG_OPENSSL_PREFIX", "/usr/local")

@ -225,7 +225,7 @@ if not on_rtd:
# we only set this to avoid the related FutureWarning from Cython3.
cython_opts = dict(compiler_directives={"language_level": "3str"})
if not is_win32:
# compile .pyx extensions to .c in parallel, does not work on windows
# Compile .pyx extensions to .c in parallel; does not work on Windows
cython_opts["nthreads"] = cpu_threads

# generate C code from Cython for ALL supported platforms, so we have them in the sdist.
@ -11,10 +11,10 @@ __version_tuple__ = _v._version.release # type: ignore
# and setuptools_scm determines a 0.1.dev... version
assert all(isinstance(v, int) for v in __version_tuple__), (
"""\
broken borgbackup version metadata: %r
Broken BorgBackup version metadata: %r

version metadata is obtained dynamically on installation via setuptools_scm,
please ensure your git repo has the correct tags or you provide the version
Version metadata is obtained dynamically during installation via setuptools_scm;
please ensure your Git repository has the correct tags, or provide the version
using SETUPTOOLS_SCM_PRETEND_VERSION in your build script.
"""
% __version__
@ -1,17 +1,17 @@
import sys
import os

# On windows loading the bundled libcrypto dll fails if the folder
# containing the dll is not in the search path. The dll is shipped
# with python in the "DLLs" folder, so let's add this folder
# to the path. The folder is always in sys.path, get it from there.
# On Windows, loading the bundled libcrypto DLL fails if the folder
# containing the DLL is not in the search path. The DLL is shipped
# with Python in the "DLLs" folder, so we add this folder
# to the PATH. The folder is always present in sys.path; get it from there.
if sys.platform.startswith("win32"):
# Keep it an iterable to support multiple folder which contain "DLLs".
# Keep it an iterable to support multiple folders that contain "DLLs".
dll_path = (p for p in sys.path if "DLLs" in os.path.normpath(p).split(os.path.sep))
os.environ["PATH"] = os.pathsep.join(dll_path) + os.pathsep + os.environ["PATH"]


# note: absolute import from "borg", it seems pyinstaller binaries do not work without this.
# Note: absolute import from "borg"; PyInstaller binaries do not work without this.
from borg.archiver import main

main()
@ -767,11 +767,11 @@ Duration: {0.duration}
:param sparse: write sparse files (chunk-granularity, independent of the original being sparse)
:param hlm: maps hlid to link_target for extracting subtrees with hardlinks correctly
:param pi: ProgressIndicatorPercent (or similar) for file extraction progress (in bytes)
:param continue_extraction: continue a previously interrupted extraction of same archive
:param continue_extraction: continue a previously interrupted extraction of the same archive
"""

def same_item(item, st):
"""is the archived item the same as the fs item at same path with stat st?"""
"""Is the archived item the same as the filesystem item at the same path with stat st?"""
if not stat.S_ISREG(st.st_mode):
# we only "optimize" for regular files.
# other file types are less frequent and have no content extraction we could "optimize away".
@ -382,7 +382,7 @@ class Archiver(
return parser

def get_args(self, argv, cmd):
"""usually, just returns argv, except if we deal with a ssh forced command for borg serve."""
"""Usually just returns argv, except when dealing with an SSH forced command for borg serve."""
result = self.parse_args(argv[1:])
if cmd is not None and result.func == self.do_serve:
# borg serve case:
@ -457,7 +457,7 @@ class Archiver(
selftest(logger)

def _setup_implied_logging(self, args):
"""turn on INFO level logging for args that imply that they will produce output"""
"""Turn on INFO-level logging for arguments that imply they will produce output."""
# map of option name to name of logger for that option
option_logger = {
"show_version": "borg.output.show-version",
@ -544,7 +544,7 @@ class Archiver(


def sig_info_handler(sig_no, stack): # pragma: no cover
"""search the stack for infos about the currently processed file and print them"""
"""Search the stack for information about the currently processed file and print it."""
with signal_handler(sig_no, signal.SIG_IGN):
for frame in inspect.getouterframes(stack):
func, loc = frame[3], frame[0].f_locals
@ -243,7 +243,7 @@ def with_archive(method):
# e.g. through "borg ... --help", define a substitution for the reference here.
# It will replace the entire :ref:`foo` verbatim.
rst_plain_text_references = {
"a_status_oddity": '"I am seeing ‘A’ (added) status for a unchanged file!?"',
"a_status_oddity": '"I am seeing ‘A’ (added) status for an unchanged file!?"',
"separate_compaction": '"Separate compaction"',
"list_item_flags": '"Item flags"',
"borg_patterns": '"borg help patterns"',
@ -306,14 +306,14 @@ def define_exclude_and_patterns(add_option, *, tag_files=False, strip_components
dest="exclude_if_present",
action="append",
type=str,
help="exclude directories that are tagged by containing a filesystem object with " "the given NAME",
help="exclude directories that are tagged by containing a filesystem object with the given NAME",
)
add_option(
"--keep-exclude-tags",
dest="keep_exclude_tags",
action="store_true",
help="if tag objects are specified with ``--exclude-if-present``, "
"don't omit the tag objects themselves from the backup archive",
"do not omit the tag objects themselves from the backup archive",
)

if strip_components:
@ -348,7 +348,7 @@ define_archive_filters_group(
metavar="PATTERN",
dest="match_archives",
action="append",
help='only consider archives matching all patterns. see "borg help match-archives".',
help='only consider archives matching all patterns. See "borg help match-archives".',
)

if sort_by:

@ -374,7 +374,7 @@ define_archive_filters_group(
type=positive_int_validator,
default=0,
action=Highlander,
help="consider first N archives after other filters were applied",
help="consider the first N archives after other filters are applied",
)
group.add_argument(
"--last",

@ -383,7 +383,7 @@ define_archive_filters_group(
type=positive_int_validator,
default=0,
action=Highlander,
help="consider last N archives after other filters were applied",
help="consider the last N archives after other filters are applied",
)

if oldest_newest:

@ -394,7 +394,7 @@ define_archive_filters_group(
dest="oldest",
type=relative_time_marker_validator,
action=Highlander,
help="consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m.",
help="consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.",
)
group.add_argument(
"--newest",

@ -402,7 +402,7 @@ define_archive_filters_group(
dest="newest",
type=relative_time_marker_validator,
action=Highlander,
help="consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m.",
help="consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g., 7d or 12m.",
)

if older_newer:

@ -413,7 +413,7 @@ define_archive_filters_group(
dest="older",
type=relative_time_marker_validator,
action=Highlander,
help="consider archives older than (now - TIMESPAN), e.g. 7d or 12m.",
help="consider archives older than (now - TIMESPAN), e.g., 7d or 12m.",
)
group.add_argument(
"--newer",

@ -421,7 +421,7 @@ define_archive_filters_group(
dest="newer",
type=relative_time_marker_validator,
action=Highlander,
help="consider archives newer than (now - TIMESPAN), e.g. 7d or 12m.",
help="consider archives newer than (now - TIMESPAN), e.g., 7d or 12m.",
)

if deleted:
@ -99,7 +99,7 @@ class ArchiveAnalyzer:
class AnalyzeMixIn:
@with_repository(compatibility=(Manifest.Operation.READ,))
def do_analyze(self, args, repository, manifest):
"""Analyze archives"""
"""Analyzes archives."""
ArchiveAnalyzer(args, repository, manifest).analyze()

def build_parser_analyze(self, subparsers, common_parser, mid_common_parser):

@ -109,21 +109,21 @@ class AnalyzeMixIn:
"""
Analyze archives to find "hot spots".

Borg analyze relies on the usual archive matching options to select the
``borg analyze`` relies on the usual archive matching options to select the
archives that should be considered for analysis (e.g. ``-a series_name``).
Then it iterates over all matching archives, over all contained files and
collects information about chunks stored in all directories it encountered.
Then it iterates over all matching archives, over all contained files, and
collects information about chunks stored in all directories it encounters.

It considers chunk IDs and their plaintext sizes (we don't have the compressed
size in the repository easily available) and adds up added/removed chunks'
sizes per direct parent directory and outputs a list of "directory: size".
It considers chunk IDs and their plaintext sizes (we do not have the compressed
size in the repository easily available) and adds up the sizes of added and removed
chunks per direct parent directory, and outputs a list of "directory: size".

You can use that list to find directories with a lot of "activity" - maybe
some of these are temporary or cache directories you did forget to exclude.
You can use that list to find directories with a lot of "activity" — maybe
some of these are temporary or cache directories you forgot to exclude.

To not have these unwanted directories in your backups, you could carefully
exclude these in ``borg create`` (for future backups) or use ``borg recreate``
to re-create existing archives without these.
To avoid including these unwanted directories in your backups, you can carefully
exclude them in ``borg create`` (for future backups) or use ``borg recreate``
to recreate existing archives without them.
"""
)
subparser = subparsers.add_parser(
@ -125,7 +125,7 @@ class BenchmarkMixIn:
print(fmt % ("D", msg, total_size_MB / dt_delete, count, file_size_formatted, content, dt_delete))

def do_benchmark_cpu(self, args):
"""Benchmark CPU bound operations."""
"""Benchmark CPU-bound operations."""
from timeit import timeit

random_10M = os.urandom(10 * 1000 * 1000)

@ -272,11 +272,11 @@ class BenchmarkMixIn:
"""
This command benchmarks borg CRUD (create, read, update, delete) operations.

It creates input data below the given PATH and backups this data into the given REPO.
It creates input data below the given PATH and backs up this data into the given REPO.
The REPO must already exist (it could be a fresh empty repo or an existing repo, the
command will create / read / update / delete some archives named borg-benchmark-crud\\* there.

Make sure you have free space there, you'll need about 1GB each (+ overhead).
Make sure you have free space there; you will need about 1 GB each (+ overhead).

If your repository is encrypted and borg needs a passphrase to unlock the key, use::

@ -316,15 +316,15 @@ class BenchmarkMixIn:
description=self.do_benchmark_crud.__doc__,
epilog=bench_crud_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="benchmarks borg CRUD (create, extract, update, delete).",
help="benchmarks Borg CRUD (create, extract, update, delete).",
)
subparser.set_defaults(func=self.do_benchmark_crud)

subparser.add_argument("path", metavar="PATH", help="path were to create benchmark input data")
subparser.add_argument("path", metavar="PATH", help="path where to create benchmark input data")

bench_cpu_epilog = process_epilog(
"""
This command benchmarks misc. CPU bound borg operations.
This command benchmarks miscellaneous CPU-bound Borg operations.

It creates input data in memory, runs the operation and then displays throughput.
To reduce outside influence on the timings, please make sure to run this with:

@ -340,6 +340,6 @@ class BenchmarkMixIn:
description=self.do_benchmark_cpu.__doc__,
epilog=bench_cpu_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="benchmarks borg CPU bound operations.",
help="benchmarks Borg CPU-bound operations.",
)
subparser.set_defaults(func=self.do_benchmark_cpu)
@ -13,7 +13,7 @@ logger = create_logger()
class CheckMixIn:
@with_repository(exclusive=True, manifest=False)
def do_check(self, args, repository):
"""Check repository consistency"""
"""Checks repository consistency."""
if args.repair:
msg = (
"This is a potentially dangerous function.\n"

@ -85,7 +85,7 @@ class CheckMixIn:
the repository. The read data is checked by size and hash. Bit rot and other
types of accidental damage can be detected this way. Running the repository
check can be split into multiple partial checks using ``--max-duration``.
When checking a ssh:// remote repository, please note that the checks run on
When checking an ssh:// remote repository, please note that the checks run on
the server and do not cause significant network traffic.

2. Checking consistency and correctness of the archive metadata and optionally

@ -94,9 +94,9 @@ class CheckMixIn:
all chunks referencing files (items) in the archive exist. This requires
reading archive and file metadata, but not data. To scan for archives whose
entries were lost from the archive directory, pass ``--find-lost-archives``.
It requires reading all data and is hence very time consuming.
It requires reading all data and is hence very time-consuming.
To additionally cryptographically verify the file (content) data integrity,
pass ``--verify-data``, which is even more time consuming.
pass ``--verify-data``, which is even more time-consuming.

When checking archives of a remote repository, archive checks run on the client
machine because they require decrypting data and therefore the encryption key.
@ -106,7 +106,7 @@ class CheckMixIn:
only.

The ``--max-duration`` option can be used to split a long-running repository
check into multiple partial checks. After the given number of seconds the check
check into multiple partial checks. After the given number of seconds, the check
is interrupted. The next partial check will continue where the previous one
stopped, until the full repository has been checked. Assuming a complete check
would take 7 hours, then running a daily check with ``--max-duration=3600``
|
||||
|
|
@ -117,33 +117,33 @@ class CheckMixIn:
|
|||
``--max-duration`` you must also pass ``--repository-only``, and must not pass
|
||||
``--archives-only``, nor ``--repair``.
|
||||
|
||||
**Warning:** Please note that partial repository checks (i.e. running it with
|
||||
**Warning:** Please note that partial repository checks (i.e., running with
|
||||
``--max-duration``) can only perform non-cryptographic checksum checks on the
|
||||
repository files. Enabling partial repository checks excepts archive checks
|
||||
for the same reason. Therefore partial checks may be useful with very large
|
||||
repositories only where a full check would take too long.
|
||||
repository files. Enabling partial repository checks excludes archive checks
|
||||
for the same reason. Therefore, partial checks may be useful only with very large
|
||||
repositories where a full check would take too long.
|
||||
|
||||
The ``--verify-data`` option will perform a full integrity verification (as
|
||||
opposed to checking just the xxh64) of data, which means reading the
|
||||
data from the repository, decrypting and decompressing it. It is a complete
|
||||
cryptographic verification and hence very time consuming, but will detect any
|
||||
cryptographic verification and hence very time-consuming, but will detect any
|
||||
accidental and malicious corruption. Tamper-resistance is only guaranteed for
|
||||
encrypted repositories against attackers without access to the keys. You can
|
||||
not use ``--verify-data`` with ``--repository-only``.
|
||||
encrypted repositories against attackers without access to the keys. You cannot
|
||||
use ``--verify-data`` with ``--repository-only``.
|
||||
|
||||
The ``--find-lost-archives`` option will also scan the whole repository, but
|
||||
tells Borg to search for lost archive metadata. If Borg encounters any archive
|
||||
metadata that doesn't match with an archive directory entry (including
|
||||
metadata that does not match an archive directory entry (including
|
||||
soft-deleted archives), it means that an entry was lost.
|
||||
Unless ``borg compact`` is called, these archives can be fully restored with
|
||||
``--repair``. Please note that ``--find-lost-archives`` must read a lot of
|
||||
data from the repository and is thus very time consuming. You can not use
|
||||
data from the repository and is thus very time-consuming. You cannot use
|
||||
``--find-lost-archives`` with ``--repository-only``.
|
||||
|
||||
About repair mode
|
||||
+++++++++++++++++
|
||||
|
||||
The check command is a readonly task by default. If any corruption is found,
|
||||
The check command is a read-only task by default. If any corruption is found,
|
||||
Borg will report the issue and proceed with checking. To actually repair the
|
||||
issues found, pass ``--repair``.
|
||||
|
||||
|
|
@ -163,7 +163,7 @@ class CheckMixIn:
|
|||
in repair mode (i.e. running it with ``--repair``).
|
||||
|
||||
Repair mode will attempt to fix any corruptions found. Fixing corruptions does
|
||||
not mean recovering lost data: Borg can not magically restore data lost due to
|
||||
not mean recovering lost data: Borg cannot magically restore data lost due to
|
||||
e.g. a hardware failure. Repairing a repository means sacrificing some data
|
||||
for the sake of the repository as a whole and the remaining data. Hence it is,
|
||||
by definition, a potentially lossy task.
|
||||
|
|
@ -189,20 +189,20 @@ class CheckMixIn:
|
|||
description=self.do_check.__doc__,
|
||||
epilog=check_epilog,
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
help="verify repository",
|
||||
help="verify the repository",
|
||||
)
|
||||
subparser.set_defaults(func=self.do_check)
|
||||
subparser.add_argument(
|
||||
"--repository-only", dest="repo_only", action="store_true", help="only perform repository checks"
|
||||
)
|
||||
subparser.add_argument(
|
||||
"--archives-only", dest="archives_only", action="store_true", help="only perform archives checks"
|
||||
"--archives-only", dest="archives_only", action="store_true", help="only perform archive checks"
|
||||
)
|
||||
subparser.add_argument(
|
||||
"--verify-data",
|
||||
dest="verify_data",
|
||||
action="store_true",
|
||||
help="perform cryptographic archive data integrity verification " "(conflicts with ``--repository-only``)",
|
||||
help="perform cryptographic archive data integrity verification (conflicts with ``--repository-only``)",
|
||||
)
|
||||
subparser.add_argument(
|
||||
"--repair", dest="repair", action="store_true", help="attempt to repair any inconsistencies found"
|
||||
|
|
@ -217,6 +217,6 @@ class CheckMixIn:
|
|||
type=int,
|
||||
default=0,
|
||||
action=Highlander,
|
||||
help="do only a partial repo check for max. SECONDS seconds (Default: unlimited)",
|
||||
help="perform only a partial repository check for at most SECONDS seconds (default: unlimited)",
|
||||
)
|
||||
define_archive_filters_group(subparser)
|
||||
|
|
|
|||
|
|
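For reference, the partial-check workflow that the epilog text above describes could be scheduled roughly like this. This is only a sketch: the repository path, the use of ``BORG_REPO``, and the schedule are illustrative assumptions, and exact invocation syntax may differ between Borg versions.

```shell
# Illustrative repository location; adjust for your setup.
export BORG_REPO=/path/to/repo

# Daily: partial repository check, at most one hour per run.
# Partial checks require --repository-only; the next run resumes
# where the previous one stopped.
borg check --repository-only --max-duration=3600

# Occasionally: full check with cryptographic verification of all
# data (very time-consuming).
borg check --verify-data
```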

@ -208,7 +208,7 @@ class ArchiveGarbageCollector:
class CompactMixIn:
@with_repository(exclusive=True, compatibility=(Manifest.Operation.DELETE,))
def do_compact(self, args, repository, manifest):
"""Collect garbage in repository"""
"""Collects garbage in the repository."""
if not args.dry_run: # support --dry-run to simplify scripting
ArchiveGarbageCollector(repository, manifest, stats=args.stats, iec=args.iec).garbage_collect()

@ -219,44 +219,42 @@ class CompactMixIn:
"""
Free repository space by deleting unused chunks.

borg compact analyzes all existing archives to find out which repository
``borg compact`` analyzes all existing archives to determine which repository
objects are actually used (referenced). It then deletes all unused objects
from the repository to free space.

Unused objects may result from:

- borg delete or prune usage
- interrupted backups (maybe retry the backup first before running compact)
- backup of source files that had an I/O error in the middle of their contents
and that were skipped due to this
- corruption of the repository (e.g. the archives directory having lost
entries, see notes below)
- use of ``borg delete`` or ``borg prune``
- interrupted backups (consider retrying the backup before running compact)
- backups of source files that encountered an I/O error mid-transfer and were skipped
- corruption of the repository (e.g., the archives directory lost entries; see notes below)

You usually don't want to run ``borg compact`` after every write operation, but
either regularly (e.g. once a month, possibly together with ``borg check``) or
You usually do not want to run ``borg compact`` after every write operation, but
either regularly (e.g., once a month, possibly together with ``borg check``) or
when disk space needs to be freed.

**Important:**

After compacting it is no longer possible to use ``borg undelete`` to recover
After compacting, it is no longer possible to use ``borg undelete`` to recover
previously soft-deleted archives.

``borg compact`` might also delete data from archives that were "lost" due to
archives directory corruption. Such archives could potentially be restored with
``borg check --find-lost-archives [--repair]``, which is slow. You therefore
might not want to do that unless there are signs of lost archives (e.g. when
might not want to do that unless there are signs of lost archives (e.g., when
seeing fatal errors when creating backups or when archives are missing in
``borg repo-list``).

When giving the ``--stats`` option, borg will internally list all repository
objects to determine their existence AND stored size. It will build a fresh
When using the ``--stats`` option, borg will internally list all repository
objects to determine their existence and stored size. It will build a fresh
chunks index from that information and cache it in the repository. For some
types of repositories, this might be very slow. It will tell you the sum of
stored object sizes, before and after compaction.

Without ``--stats``, borg will rely on the cached chunks index to determine
existing object IDs (but there is no stored size information in the index,
thus it can't compute before/after compaction size statistics).
thus it cannot compute before/after compaction size statistics).
"""
)
subparser = subparsers.add_parser(

@ -266,10 +264,12 @@ class CompactMixIn:
description=self.do_compact.__doc__,
epilog=compact_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="compact repository",
help="compact the repository",
)
subparser.set_defaults(func=self.do_compact)
subparser.add_argument("-n", "--dry-run", dest="dry_run", action="store_true", help="do nothing")
subparser.add_argument(
"-n", "--dry-run", dest="dry_run", action="store_true", help="do not change the repository"
)
subparser.add_argument(
"-s", "--stats", dest="stats", action="store_true", help="print statistics (might be much slower)"
)
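As a usage sketch of the options discussed above (the repository location is assumed to come from ``BORG_REPO``; this is illustrative, not an authoritative invocation):

```shell
# Preview: report what compaction would do, without changing the repository.
borg compact --dry-run

# Free space; --stats additionally reports the sum of stored object
# sizes before and after compaction (can be much slower).
borg compact --stats
```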

@ -42,7 +42,7 @@ logger = create_logger()
class CreateMixIn:
@with_repository(compatibility=(Manifest.Operation.WRITE,))
def do_create(self, args, repository, manifest):
"""Create new archive"""
"""Creates a new archive."""
key = manifest.key
matcher = PatternMatcher(fallback=True)
matcher.add_inclexcl(args.patterns)

@ -552,8 +552,8 @@ class CreateMixIn:
create_epilog = process_epilog(
"""
This command creates a backup archive containing all files found while recursively
traversing all paths specified. Paths are added to the archive as they are given,
that means if relative paths are desired, the command has to be run from the correct
traversing all specified paths. Paths are added to the archive as they are given,
which means that if relative paths are desired, the command must be run from the correct
directory.

The slashdot hack in paths (recursion roots) is triggered by using ``/./``:

@ -561,23 +561,23 @@ class CreateMixIn:
strip the prefix on the left side of ``./`` from the archived items (in this case,
``this/gets/archived`` will be the path in the archived item).

When giving '-' as path, borg will read data from standard input and create a
file 'stdin' in the created archive from that data. In some cases it's more
appropriate to use --content-from-command, however. See section *Reading from
stdin* below for details.
When specifying '-' as a path, borg will read data from standard input and create a
file named 'stdin' in the created archive from that data. In some cases, it is more
appropriate to use --content-from-command. See the section *Reading from stdin*
below for details.

The archive will consume almost no disk space for files or parts of files that
have already been stored in other archives.

The archive name does NOT need to be unique, you can and should use the same
name for a series of archives. The unique archive identifier is its ID (hash)
The archive name does not need to be unique; you can and should use the same
name for a series of archives. The unique archive identifier is its ID (hash),
and you can abbreviate the ID as long as it is unique.

In the archive name, you may use the following placeholders:
{now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.

Backup speed is increased by not reprocessing files that are already part of
existing archives and weren't modified. The detection of unmodified files is
existing archives and were not modified. The detection of unmodified files is
done by comparing multiple file metadata values with previous values kept in
the files cache.

@ -603,14 +603,14 @@ class CreateMixIn:
ctime vs. mtime: safety vs. speed

- ctime is a rather safe way to detect changes to a file (metadata and contents)
as it can not be set from userspace. But, a metadata-only change will already
as it cannot be set from userspace. But a metadata-only change will already
update the ctime, so there might be some unnecessary chunking/hashing even
without content changes. Some filesystems do not support ctime (change time).
E.g. doing a chown or chmod to a file will change its ctime.
- mtime usually works and only updates if file contents were changed. But mtime
can be arbitrarily set from userspace, e.g. to set mtime back to the same value
can be arbitrarily set from userspace, e.g., to set mtime back to the same value
it had before a content change happened. This can be used maliciously as well as
well-meant, but in both cases mtime based cache modes can be problematic.
well-meant, but in both cases mtime-based cache modes can be problematic.

The ``--files-changed`` option controls how Borg detects if a file has changed during backup:
- ctime (default): Use ctime to detect changes. This is the safest option.

@ -764,7 +764,7 @@ class CreateMixIn:
description=self.do_create.__doc__,
epilog=create_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="create backup",
help="create a backup",
)
subparser.set_defaults(func=self.do_create)

@ -778,7 +778,7 @@ class CreateMixIn:
)

subparser.add_argument(
"--list", dest="output_list", action="store_true", help="output verbose list of items (files, dirs, ...)"
"--list", dest="output_list", action="store_true", help="output a verbose list of items (files, dirs, ...)"
)
subparser.add_argument(
"--filter",

@ -824,7 +824,7 @@ class CreateMixIn:
subparser.add_argument(
"--content-from-command",
action="store_true",
help="interpret PATH as command and store its stdout. See also section Reading from" " stdin below.",
help="interpret PATH as a command and store its stdout. See also the section 'Reading from stdin' below.",
)
subparser.add_argument(
"--paths-from-stdin",
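The slashdot hack and the stdin behavior documented in the epilog text above can be illustrated like this (archive names, paths, and the piped command are hypothetical, and the repository is assumed to come from ``BORG_REPO``):

```shell
# Slashdot hack: the prefix left of /./ is stripped from archived paths,
# so the item below is stored as "this/gets/archived".
borg create my-archive-{now} /home/user/./this/gets/archived

# '-' as path: the piped data is stored as a file named "stdin".
mysqldump mydb | borg create db-dump-{now} -
```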

@ -24,13 +24,13 @@ from ._common import process_epilog

class DebugMixIn:
def do_debug_info(self, args):
"""display system information for debugging / bug reports"""
"""Displays system information for debugging and bug reports."""
print(sysinfo())
print("Process ID:", get_process_id())

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_dump_archive_items(self, args, repository, manifest):
"""dump (decrypted, decompressed) archive items metadata (not: data)"""
"""Dumps (decrypted, decompressed) archive item metadata (not data)."""
repo_objs = manifest.repo_objs
archive_info = manifest.archives.get_one([args.name])
archive = Archive(manifest, archive_info.id)

@ -44,7 +44,7 @@ class DebugMixIn:

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_dump_archive(self, args, repository, manifest):
"""dump decoded archive metadata (not: data)"""
"""Dumps decoded archive metadata (not data)."""
archive_info = manifest.archives.get_one([args.name])
repo_objs = manifest.repo_objs
try:

@ -99,7 +99,7 @@ class DebugMixIn:

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_dump_manifest(self, args, repository, manifest):
"""dump decoded repository manifest"""
"""Dumps decoded repository manifest."""
repo_objs = manifest.repo_objs
cdata = repository.get_manifest()
_, data = repo_objs.parse(manifest.MANIFEST_ID, cdata, ro_type=ROBJ_MANIFEST)

@ -111,7 +111,7 @@ class DebugMixIn:

@with_repository(manifest=False)
def do_debug_dump_repo_objs(self, args, repository):
"""dump (decrypted, decompressed) repo objects"""
"""Dumps (decrypted, decompressed) repository objects."""
from ..crypto.key import key_factory

def decrypt_dump(id, cdata):

@ -137,7 +137,7 @@ class DebugMixIn:

@with_repository(manifest=False)
def do_debug_search_repo_objs(self, args, repository):
"""search for byte sequences in repo objects, repo index MUST be current/correct"""
"""Searches for byte sequences in repository objects; the repository index MUST be current/correct."""
context = 32

def print_finding(info, wanted, data, offset):

@ -201,7 +201,7 @@ class DebugMixIn:

@with_repository(manifest=False)
def do_debug_get_obj(self, args, repository):
"""get object contents from the repository and write it into file"""
"""Gets object contents from the repository and writes them to a file."""
hex_id = args.id
try:
id = hex_to_bin(hex_id, length=32)

@ -217,7 +217,7 @@ class DebugMixIn:

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_id_hash(self, args, repository, manifest):
"""compute id-hash for file contents"""
"""Computes id-hash for file contents."""
with open(args.path, "rb") as f:
data = f.read()
key = manifest.key

@ -226,7 +226,7 @@ class DebugMixIn:

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_parse_obj(self, args, repository, manifest):
"""parse borg object file into meta dict and data (decrypting, decompressing)"""
"""Parses a Borg object file into a metadata dict and data (decrypting, decompressing)."""

# get the object from id
hex_id = args.id

@ -249,7 +249,7 @@ class DebugMixIn:

@with_repository(compatibility=Manifest.NO_OPERATION_CHECK)
def do_debug_format_obj(self, args, repository, manifest):
"""format file and metadata into borg object file"""
"""Formats file and metadata into a Borg object file."""

# get the object from id
hex_id = args.id

@ -273,7 +273,7 @@ class DebugMixIn:

@with_repository(manifest=False)
def do_debug_put_obj(self, args, repository):
"""put file contents into the repository"""
"""Puts file contents into the repository."""
with open(args.path, "rb") as f:
data = f.read()
hex_id = args.id

@ -287,7 +287,7 @@ class DebugMixIn:

@with_repository(manifest=False, exclusive=True)
def do_debug_delete_obj(self, args, repository):
"""delete the objects with the given IDs from the repo"""
"""Deletes the objects with the given IDs from the repository."""
for hex_id in args.ids:
try:
id = hex_to_bin(hex_id, length=32)

@ -302,7 +302,7 @@ class DebugMixIn:
print("Done.")

def do_debug_convert_profile(self, args):
"""convert Borg profile to Python profile"""
"""Converts a Borg profile to a Python profile."""
import marshal

with open(args.output, "wb") as wfd, open(args.input, "rb") as rfd:

@ -489,7 +489,7 @@ class DebugMixIn:
# format_obj
debug_format_obj_epilog = process_epilog(
"""
This command formats the file and metadata into objectfile.
This command formats the file and metadata into a Borg object file.
"""
)
subparser = debug_parsers.add_parser(

@ -499,12 +499,12 @@ class DebugMixIn:
description=self.do_debug_format_obj.__doc__,
epilog=debug_format_obj_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="format file and metadata into borg objectfile",
help="format file and metadata into a Borg object file",
)
subparser.set_defaults(func=self.do_debug_format_obj)
subparser.add_argument("id", metavar="ID", type=str, help="hex object ID to get from the repo")
subparser.add_argument(
"binary_path", metavar="BINARY_PATH", type=str, help="path of the file to convert into objectfile"
"binary_path", metavar="BINARY_PATH", type=str, help="path of the file to convert into an object file"
)
subparser.add_argument(
"json_path", metavar="JSON_PATH", type=str, help="path of the json file to read metadata from"

@ -523,7 +523,7 @@ class DebugMixIn:
"object_path",
metavar="OBJECT_PATH",
type=str,
help="path of the objectfile to write compressed encrypted data into",
help="path of the object file to write compressed encrypted data into",
)

debug_get_obj_epilog = process_epilog(

@ -14,7 +14,7 @@ logger = create_logger()
class DeleteMixIn:
@with_repository(manifest=False)
def do_delete(self, args, repository):
"""Delete archives"""
"""Deletes archives."""
self.output_list = args.output_list
dry_run = args.dry_run
manifest = Manifest.load(repository, (Manifest.Operation.DELETE,))

@ -73,9 +73,9 @@ class DeleteMixIn:

When in doubt, use ``--dry-run --list`` to see what would be deleted.

You can delete multiple archives by specifying a matching pattern,
using the ``--match-archives PATTERN`` option (for more info on these patterns,
see :ref:`borg_patterns`).
You can delete multiple archives by specifying a match pattern using
the ``--match-archives PATTERN`` option (for more information on these
patterns, see :ref:`borg_patterns`).
"""
)
subparser = subparsers.add_parser(

@ -85,12 +85,14 @@ class DeleteMixIn:
description=self.do_delete.__doc__,
epilog=delete_epilog,
formatter_class=argparse.RawDescriptionHelpFormatter,
help="delete archive",
help="delete archives",
)
subparser.set_defaults(func=self.do_delete)
subparser.add_argument("-n", "--dry-run", dest="dry_run", action="store_true", help="do not change repository")
subparser.add_argument(
"--list", dest="output_list", action="store_true", help="output verbose list of archives"
"-n", "--dry-run", dest="dry_run", action="store_true", help="do not change the repository"
)
subparser.add_argument(
"--list", dest="output_list", action="store_true", help="output a verbose list of archives"
)
define_archive_filters_group(subparser)
subparser.add_argument(

@ -17,7 +17,7 @@ logger = create_logger()
class DiffMixIn:
@with_repository(compatibility=(Manifest.Operation.READ,))
def do_diff(self, args, repository, manifest):
"""Diff contents of two archives"""
"""Finds differences between two archives."""

def actual_change(j):
j = j.to_dict()

@ -119,7 +119,7 @@ class DiffMixIn:
The FORMAT specifier syntax
+++++++++++++++++++++++++++

The ``--format`` option uses python's `format string syntax
The ``--format`` option uses Python's `format string syntax
<https://docs.python.org/3.9/library/string.html#formatstrings>`_.

Examples:

@ -170,7 +170,7 @@ class DiffMixIn:
"--same-chunker-params",
dest="same_chunker_params",
action="store_true",
help="Override check of chunker parameters.",
help="override the check of chunker parameters",
)
subparser.add_argument("--sort", dest="sort", action="store_true", help="Sort the output lines by file path.")
subparser.add_argument(

@ -180,7 +180,7 @@ class DiffMixIn:
action=Highlander,
help='specify format for differences between archives (default: "{change} {path}{NL}")',
)
subparser.add_argument("--json-lines", action="store_true", help="Format output as JSON Lines. ")
subparser.add_argument("--json-lines", action="store_true", help="Format output as JSON Lines.")
subparser.add_argument(
"--content-only",
action="store_true",

@ -193,6 +193,6 @@ class DiffMixIn:
metavar="PATH",
nargs="*",
type=PathSpec,
help="paths of items inside the archives to compare; patterns are supported",
help="paths of items inside the archives to compare; patterns are supported.",
)
define_exclusion_group(subparser)

@ -24,15 +24,14 @@ class ExtractMixIn:
@with_repository(compatibility=(Manifest.Operation.READ,))
@with_archive
def do_extract(self, args, repository, manifest, archive):
"""Extract archive contents"""
"""Extracts archive contents."""
# be restrictive when restoring files, restore permissions later
if sys.getfilesystemencoding() == "ascii":
logger.warning(
'Warning: File system encoding is "ascii", extracting non-ascii filenames will not be supported.'
)
logger.warning('Warning: Filesystem encoding is "ascii"; extracting non-ASCII filenames is not supported.')
if sys.platform.startswith(("linux", "freebsd", "netbsd", "openbsd", "darwin")):
logger.warning(
"Hint: You likely need to fix your locale setup. E.g. install locales and use: LANG=en_US.UTF-8"
"Hint: You likely need to fix your locale setup. "
"For example, install locales and use: LANG=en_US.UTF-8"
)

matcher = build_matcher(args.patterns, args.paths)

@ -50,7 +49,9 @@ class ExtractMixIn:
filter = build_filter(matcher, strip_components)
if progress:
pi = ProgressIndicatorPercent(msg="%5.1f%% Extracting: %s", step=0.1, msgid="extract")
pi.output("Calculating total archive size for the progress indicator (might take long for large archives)")
pi.output(
"Calculating total archive size for the progress indicator (might take a long time for large archives)"
)
extracted_size = sum(item.get_size() for item in archive.iter_items(filter))
pi.total = extracted_size
else:

@ -126,16 +127,16 @@ class ExtractMixIn:

extract_epilog = process_epilog(
"""
This command extracts the contents of an archive. By default the entire
archive is extracted but a subset of files and directories can be selected
by passing a list of ``PATHs`` as arguments. The file selection can further
be restricted by using the ``--exclude`` option.
This command extracts the contents of an archive. By default, the entire
archive is extracted, but a subset of files and directories can be selected
by passing a list of ``PATH`` arguments. The file selection can be further
restricted by using the ``--exclude`` option.

For more help on include/exclude patterns, see the :ref:`borg_patterns` command output.

By using ``--dry-run``, you can do all extraction steps except actually writing the
output data: reading metadata and data chunks from the repo, checking the hash/hmac,
decrypting, decompressing.
output data: reading metadata and data chunks from the repository, checking the hash/HMAC,
decrypting, and decompressing.

``--progress`` can be slower than no progress display, since it makes one additional
pass over the archive metadata.

@ -146,8 +147,8 @@ class ExtractMixIn:
so make sure you ``cd`` to the right place before calling ``borg extract``.

When parent directories are not extracted (because of using file/directory selection
or any other reason), borg can not restore parent directories' metadata, e.g. owner,
group, permission, etc.
or any other reason), Borg cannot restore parent directories' metadata, e.g., owner,
group, permissions, etc.
"""
)
subparser = subparsers.add_parser(

@ -161,22 +162,22 @@ class ExtractMixIn:
)
subparser.set_defaults(func=self.do_extract)
subparser.add_argument(
"--list", dest="output_list", action="store_true", help="output verbose list of items (files, dirs, ...)"
"--list", dest="output_list", action="store_true", help="output a verbose list of items (files, dirs, ...)"
)
subparser.add_argument(
"-n", "--dry-run", dest="dry_run", action="store_true", help="do not actually change any files"
)
subparser.add_argument(
"--numeric-ids",
dest="numeric_ids",
action="store_true",
help="only obey numeric user and group identifiers",
"--numeric-ids", dest="numeric_ids", action="store_true", help="only use numeric user and group identifiers"
)
subparser.add_argument(
"--noflags", dest="noflags", action="store_true", help="do not extract/set flags (e.g. NODUMP, IMMUTABLE)"
"--noflags",
dest="noflags",
action="store_true",
help="do not extract or set flags (e.g. NODUMP, IMMUTABLE)",
)
subparser.add_argument("--noacls", dest="noacls", action="store_true", help="do not extract/set ACLs")
subparser.add_argument("--noxattrs", dest="noxattrs", action="store_true", help="do not extract/set xattrs")
subparser.add_argument("--noacls", dest="noacls", action="store_true", help="do not extract or set ACLs")
subparser.add_argument("--noxattrs", dest="noxattrs", action="store_true", help="do not extract or set xattrs")
subparser.add_argument(
"--stdout", dest="stdout", action="store_true", help="write all extracted data to stdout"
)

@ -184,13 +185,13 @@ class ExtractMixIn:
"--sparse",
dest="sparse",
action="store_true",
help="create holes in output sparse file from all-zero chunks",
help="create holes in the output sparse file from all-zero chunks",
)
subparser.add_argument(
"--continue",
dest="continue_extraction",
action="store_true",
help="continue a previously interrupted extraction of same archive",
help="continue a previously interrupted extraction of the same archive",
)
subparser.add_argument("name", metavar="NAME", type=archivename_validator, help="specify the archive name")
subparser.add_argument(
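A usage sketch for the extraction behavior described in the epilog above (archive name and paths are hypothetical; as the epilog notes, ``borg extract`` writes into the current working directory):

```shell
# Dry run: read, authenticate, decrypt, and decompress everything,
# but do not write any files.
borg extract --dry-run my-archive

# Extract only a subtree; output lands in the current directory.
cd /srv/restore
borg extract my-archive home/user/Documents
```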
@@ -68,13 +68,13 @@ class HelpMixIn:
 always normalized to a forward slash '/' before applying a pattern.

 Path prefix, selector ``pp:``
-This pattern style is useful to match whole sub-directories. The pattern
+This pattern style is useful to match whole subdirectories. The pattern
 ``pp:root/somedir`` matches ``root/somedir`` and everything therein.
 A leading path separator is always removed.

 Path full-match, selector ``pf:``
 This pattern style is (only) useful to match full paths.
-This is kind of a pseudo pattern as it can not have any variable or
+This is kind of a pseudo pattern as it cannot have any variable or
 unspecified parts - the full path must be given. ``pf:root/file.ext``
 matches ``root/file.ext`` only. A leading path separator is always
 removed.
@@ -101,15 +101,15 @@ class HelpMixIn:
 from within a shell, the patterns should be quoted to protect them from
 expansion.

-Patterns matching special characters, e.g. white space, within a shell may
+Patterns matching special characters, e.g. whitespace, within a shell may
 require adjustments, such as putting quotation marks around the arguments.
-Example:
+Example:
 Using bash, the following command line option would match and exclude "item name":
 ``--pattern='-path/item name'``
-Note that when patterns are used within a pattern file directly read by borg,
-e.g. when using ``--exclude-from`` or ``--patterns-from``, there is no shell
+Note that when patterns are used within a pattern file directly read by borg,
+e.g. when using ``--exclude-from`` or ``--patterns-from``, there is no shell
 involved and thus no quotation marks are required.

 The ``--exclude-from`` option permits loading exclusion patterns from a text
 file with one pattern per line. Lines empty or starting with the hash sign
 '#' after removing whitespace on both ends are ignored. The optional style
@@ -216,7 +216,7 @@ class HelpMixIn:

 .. note::

-It's possible that a sub-directory/file is matched while parent
+It is possible that a subdirectory or file is matched while its parent
 directories are not. In that case, parent directories are not backed
 up and thus their user, group, permission, etc. cannot be restored.

@@ -382,14 +382,14 @@ class HelpMixIn:
 )
 helptext["compression"] = textwrap.dedent(
 """
-It is no problem to mix different compression methods in one repo,
+It is no problem to mix different compression methods in one repository,
 deduplication is done on the source data chunks (not on the compressed
 or encrypted data).

-If some specific chunk was once compressed and stored into the repo, creating
+If some specific chunk was once compressed and stored into the repository, creating
 another backup that also uses this chunk will not change the stored chunk.
 So if you use different compression specs for the backups, whichever stores a
-chunk first determines its compression. See also borg recreate.
+chunk first determines its compression. See also ``borg recreate``.

 Compression is lz4 by default. If you want something else, you have to specify what you want.

@@ -68,11 +68,11 @@ class InfoMixIn:

 Please note that the deduplicated sizes of the individual archives do not add
 up to the deduplicated size of the repository ("all archives"), because the two
-are meaning different things:
+mean different things:

 This archive / deduplicated size = amount of data stored ONLY for this archive
 = unique chunks of this archive.
-All archives / deduplicated size = amount of data stored in the repo
+All archives / deduplicated size = amount of data stored in the repository
 = all chunks in the repository.
 """
 )
@@ -19,7 +19,7 @@ logger = create_logger(__name__)
 class KeysMixIn:
 @with_repository(compatibility=(Manifest.Operation.CHECK,))
 def do_change_passphrase(self, args, repository, manifest):
-"""Change repository key file passphrase"""
+"""Changes the repository key file passphrase."""
 key = manifest.key
 if not hasattr(key, "change_passphrase"):
 raise CommandError("This repository is not encrypted, cannot change the passphrase.")
@@ -31,7 +31,7 @@ class KeysMixIn:

 @with_repository(exclusive=True, manifest=True, cache=True, compatibility=(Manifest.Operation.CHECK,))
 def do_change_location(self, args, repository, manifest, cache):
-"""Change repository key location"""
+"""Changes the repository key location."""
 key = manifest.key
 if not hasattr(key, "change_passphrase"):
 raise CommandError("This repository is not encrypted, cannot change the key location.")
@@ -85,7 +85,7 @@ class KeysMixIn:

 @with_repository(lock=False, manifest=False, cache=False)
 def do_key_export(self, args, repository):
-"""Export the repository key for backup"""
+"""Exports the repository key for backup."""
 manager = KeyManager(repository)
 manager.load_keyblob()
 try:
@@ -104,15 +104,15 @@ class KeysMixIn:

 @with_repository(lock=False, manifest=False, cache=False)
 def do_key_import(self, args, repository):
-"""Import the repository key from backup"""
+"""Imports the repository key from backup."""
 manager = KeyManager(repository)
 if args.paper:
 if args.path:
-raise CommandError("with --paper import from file is not supported")
+raise CommandError("with --paper, import from file is not supported")
 manager.import_paperkey(args)
 else:
 if not args.path:
-raise CommandError("expected input file to import key from")
+raise CommandError("expected input file to import the key from")
 if args.path != "-" and not os.path.exists(args.path):
 raise CommandError(f"input file does not exist: {args.path}")
 manager.import_keyfile(args)
@@ -124,10 +124,10 @@ class KeysMixIn:
 "key",
 parents=[mid_common_parser],
 add_help=False,
-description="Manage a keyfile or repokey of a repository",
+description="Manage the keyfile or repokey of a repository",
 epilog="",
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="manage repository key",
+help="manage the repository key",
 )

 key_parsers = subparser.add_subparsers(title="required arguments", metavar="<command>")
@@ -165,7 +165,7 @@ class KeysMixIn:
 description=self.do_key_export.__doc__,
 epilog=key_export_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="export repository key for backup",
+help="export the repository key for backup",
 )
 subparser.set_defaults(func=self.do_key_export)
 subparser.add_argument("path", metavar="PATH", nargs="?", type=PathSpec, help="where to store the backup")
@@ -179,7 +179,7 @@ class KeysMixIn:
 "--qr-html",
 dest="qr",
 action="store_true",
-help="Create an html file suitable for printing and later type-in or qr scan",
+help="Create an HTML file suitable for printing and later type-in or QR scan",
 )

 key_import_epilog = process_epilog(
@@ -207,7 +207,7 @@ class KeysMixIn:
 description=self.do_key_import.__doc__,
 epilog=key_import_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="import repository key from backup",
+help="import the repository key from backup",
 )
 subparser.set_defaults(func=self.do_key_import)
 subparser.add_argument(
@@ -238,16 +238,16 @@ class KeysMixIn:
 description=self.do_change_passphrase.__doc__,
 epilog=change_passphrase_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="change repository passphrase",
+help="change the repository passphrase",
 )
 subparser.set_defaults(func=self.do_change_passphrase)

 change_location_epilog = process_epilog(
 """
-Change the location of a borg key. The key can be stored at different locations:
+Change the location of a Borg key. The key can be stored at different locations:

 - keyfile: locally, usually in the home directory
-- repokey: inside the repo (in the repo config)
+- repokey: inside the repository (in the repository config)

 Please note:

@@ -262,7 +262,7 @@ class KeysMixIn:
 description=self.do_change_location.__doc__,
 epilog=change_location_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="change key location",
+help="change the key location",
 )
 subparser.set_defaults(func=self.do_change_location)
 subparser.add_argument(
@@ -18,7 +18,7 @@ logger = create_logger()
 class ListMixIn:
 @with_repository(compatibility=(Manifest.Operation.READ,))
 def do_list(self, args, repository, manifest):
-"""List archive contents"""
+"""List archive contents."""
 matcher = build_matcher(args.patterns, args.paths)
 if args.format is not None:
 format = args.format
@@ -64,14 +64,14 @@ class ListMixIn:
 """
 This command lists the contents of an archive.

-For more help on include/exclude patterns, see the :ref:`borg_patterns` command output.
+For more help on include/exclude patterns, see the output of :ref:`borg_patterns`.

 .. man NOTES

 The FORMAT specifier syntax
 +++++++++++++++++++++++++++

-The ``--format`` option uses python's `format string syntax
+The ``--format`` option uses Python's `format string syntax
 <https://docs.python.org/3.9/library/string.html#formatstrings>`_.

 Examples:
@@ -132,11 +132,7 @@ class ListMixIn:
 "Some keys are always present. Note: JSON can only represent text.",
 )
 subparser.add_argument(
-"--depth",
-metavar="N",
-dest="depth",
-type=int,
-help="only list files up to the specified directory level depth",
+"--depth", metavar="N", dest="depth", type=int, help="only list files up to the specified directory depth"
 )
 subparser.add_argument("name", metavar="NAME", type=archivename_validator, help="specify the archive name")
 subparser.add_argument(
@@ -14,7 +14,7 @@ logger = create_logger()
 class LocksMixIn:
 @with_repository(manifest=False, exclusive=True)
 def do_with_lock(self, args, repository):
-"""run a user specified command with the repository lock held"""
+"""Runs a user-specified command with the repository lock held."""
 # the repository lock needs to get refreshed regularly, or it will be killed as stale.
 # refreshing the lock is not part of the repository API, so we do it indirectly via repository.info.
 lock_refreshing_thread = ThreadRunner(sleep_interval=60, target=repository.info)
@@ -25,13 +25,13 @@ class LocksMixIn:
 rc = subprocess.call([args.command] + args.args, env=env)  # nosec B603
 set_ec(rc)
 except (FileNotFoundError, OSError, ValueError) as e:
-raise CommandError(f"Error while trying to run '{args.command}': {e}")
+raise CommandError(f"Failed to execute command: {e}")
 finally:
 lock_refreshing_thread.terminate()

 @with_repository(lock=False, manifest=False)
 def do_break_lock(self, args, repository):
-"""Break the repository lock (e.g. in case it was left by a dead borg."""
+"""Breaks the repository lock (for example, if it was left by a dead Borg process)."""
 repository.break_lock()
 Cache.break_lock(repository)

@@ -41,8 +41,8 @@ class LocksMixIn:
 break_lock_epilog = process_epilog(
 """
 This command breaks the repository and cache locks.
-Please use carefully and only while no borg process (on any machine) is
-trying to access the Cache or the Repository.
+Use with care and only when no Borg process (on any machine) is
+trying to access the cache or the repository.
 """
 )
 subparser = subparsers.add_parser(
@@ -52,7 +52,7 @@ class LocksMixIn:
 description=self.do_break_lock.__doc__,
 epilog=break_lock_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="break repository and cache locks",
+help="break the repository and cache locks",
 )
 subparser.set_defaults(func=self.do_break_lock)

@@ -64,16 +64,16 @@ class LocksMixIn:

 $ BORG_REPO=/mnt/borgrepo borg with-lock rsync -av /mnt/borgrepo /somewhere/else/borgrepo

-It will first try to acquire the lock (make sure that no other operation is
-running in the repo), then execute the given command as a subprocess and wait
-for its termination, release the lock and return the user command's return
-code as borg's return code.
+It first tries to acquire the lock (make sure that no other operation is
+running in the repository), then executes the given command as a subprocess and waits
+for its termination, releases the lock, and returns the user command's return
+code as Borg's return code.

 .. note::

 If you copy a repository with the lock held, the lock will be present in
-the copy. Thus, before using borg on the copy from a different host,
-you need to use "borg break-lock" on the copied repository, because
+the copy. Before using Borg on the copy from a different host,
+you need to run ``borg break-lock`` on the copied repository, because
 Borg is cautious and does not automatically remove stale locks made by a different host.
 """
 )
@@ -84,7 +84,7 @@ class LocksMixIn:
 description=self.do_with_lock.__doc__,
 epilog=with_lock_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="run user command with lock held",
+help="run a user command with the lock held",
 )
 subparser.set_defaults(func=self.do_with_lock)
 subparser.add_argument("command", metavar="COMMAND", help="command to run")
@@ -16,7 +16,7 @@ logger = create_logger()

 class MountMixIn:
 def do_mount(self, args):
-"""Mount archive or an entire repository as a FUSE filesystem"""
+"""Mounts an archive or an entire repository as a FUSE filesystem."""
 # Perform these checks before opening the repository and asking for a passphrase.

 from ..fuse_impl import llfuse, BORG_FUSE_IMPL
@@ -46,7 +46,7 @@ class MountMixIn:
 raise RTError("FUSE mount failed")

 def do_umount(self, args):
-"""un-mount the FUSE filesystem"""
+"""Unmounts the FUSE filesystem."""
 umount(args.mountpoint)

 def build_parser_mount_umount(self, subparsers, common_parser, mid_common_parser):
@@ -64,15 +64,15 @@ class MountMixIn:
 archives and the directory structure below these will be loaded on-demand from
 the repository when entering these directories, so expect some delay.

-Unless the ``--foreground`` option is given the command will run in the
-background until the filesystem is ``umounted``.
+Unless the ``--foreground`` option is given, the command will run in the
+background until the filesystem is ``unmounted``.

 Performance tips:

-- when doing a "whole repository" mount:
-do not enter archive dirs if not needed, this avoids on-demand loading.
-- only mount a specific archive, not the whole repository.
-- only mount specific paths in a specific archive, not the complete archive.
+- When doing a "whole repository" mount:
+do not enter archive directories if not needed; this avoids on-demand loading.
+- Only mount a specific archive, not the whole repository.
+- Only mount specific paths in a specific archive, not the complete archive.

 The command ``borgfs`` provides a wrapper for ``borg mount``. This can also be
 used in fstab entries:
@@ -84,35 +84,35 @@ class MountMixIn:
 For FUSE configuration and mount options, see the mount.fuse(8) manual page.

 Borg's default behavior is to use the archived user and group names of each
-file and map them to the system's respective user and group ids.
+file and map them to the system's respective user and group IDs.
 Alternatively, using ``numeric-ids`` will instead use the archived user and
-group ids without any mapping.
+group IDs without any mapping.

 The ``uid`` and ``gid`` mount options (implemented by Borg) can be used to
-override the user and group ids of all files (i.e., ``borg mount -o
+override the user and group IDs of all files (i.e., ``borg mount -o
 uid=1000,gid=1000``).

 The man page references ``user_id`` and ``group_id`` mount options
-(implemented by fuse) which specify the user and group id of the mount owner
-(aka, the user who does the mounting). It is set automatically by libfuse (or
+(implemented by FUSE) which specify the user and group ID of the mount owner
+(also known as the user who does the mounting). It is set automatically by libfuse (or
 the filesystem if libfuse is not used). However, you should not specify these
-manually. Unlike the ``uid`` and ``gid`` mount options which affect all files,
-``user_id`` and ``group_id`` affect the user and group id of the mounted
+manually. Unlike the ``uid`` and ``gid`` mount options, which affect all files,
+``user_id`` and ``group_id`` affect the user and group ID of the mounted
 (base) directory.

-Additional mount options supported by borg:
+Additional mount options supported by Borg:

 - ``versions``: when used with a repository mount, this gives a merged, versioned
-view of the files in the archives. EXPERIMENTAL, layout may change in future.
-- ``allow_damaged_files``: by default damaged files (where chunks are missing)
+view of the files in the archives. EXPERIMENTAL; layout may change in the future.
+- ``allow_damaged_files``: by default, damaged files (where chunks are missing)
 will return EIO (I/O error) when trying to read the related parts of the file.
 Set this option to replace the missing parts with all-zero bytes.
 - ``ignore_permissions``: for security reasons the ``default_permissions`` mount
-option is internally enforced by borg. ``ignore_permissions`` can be given to
+option is internally enforced by Borg. ``ignore_permissions`` can be given to
 not enforce ``default_permissions``.

-The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is meant for advanced users
-to tweak the performance. It sets the number of cached data chunks; additional
+The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is intended for advanced users
+to tweak performance. It sets the number of cached data chunks; additional
 memory usage can be up to ~8 MiB times this number. The default is the number
 of CPU cores.

@@ -131,13 +131,13 @@ class MountMixIn:
 description=self.do_mount.__doc__,
 epilog=mount_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="mount repository",
+help="mount a repository",
 )
 self._define_borg_mount(subparser)

 umount_epilog = process_epilog(
 """
-This command un-mounts a FUSE filesystem that was mounted with ``borg mount``.
+This command unmounts a FUSE filesystem that was mounted with ``borg mount``.

 This is a convenience wrapper that just calls the platform-specific shell
 command - usually this is either umount or fusermount -u.
@@ -150,11 +150,11 @@ class MountMixIn:
 description=self.do_umount.__doc__,
 epilog=umount_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="umount repository",
+help="unmount a repository",
 )
 subparser.set_defaults(func=self.do_umount)
 subparser.add_argument(
-"mountpoint", metavar="MOUNTPOINT", type=str, help="mountpoint of the filesystem to umount"
+"mountpoint", metavar="MOUNTPOINT", type=str, help="mountpoint of the filesystem to unmount"
 )

 def build_parser_borgfs(self, parser):
@@ -162,7 +162,7 @@ class MountMixIn:
 parser.description = self.do_mount.__doc__
 parser.epilog = "For more information, see borg mount --help."
 parser.formatter_class = argparse.RawDescriptionHelpFormatter
-parser.help = "mount repository"
+parser.help = "mount a repository"
 self._define_borg_mount(parser)
 return parser

@@ -170,16 +170,16 @@ class MountMixIn:
 from ._common import define_exclusion_group, define_archive_filters_group

 parser.set_defaults(func=self.do_mount)
-parser.add_argument("mountpoint", metavar="MOUNTPOINT", type=str, help="where to mount filesystem")
+parser.add_argument("mountpoint", metavar="MOUNTPOINT", type=str, help="where to mount the filesystem")
 parser.add_argument(
 "-f", "--foreground", dest="foreground", action="store_true", help="stay in foreground, do not daemonize"
 )
-parser.add_argument("-o", dest="options", type=str, action=Highlander, help="Extra mount options")
+parser.add_argument("-o", dest="options", type=str, action=Highlander, help="extra mount options")
 parser.add_argument(
 "--numeric-ids",
 dest="numeric_ids",
 action="store_true",
-help="use numeric user and group identifiers from archive(s)",
+help="use numeric user and group identifiers from archives",
 )
 define_archive_filters_group(parser)
 parser.add_argument(
@@ -119,7 +119,7 @@ def prune_split(archives, rule, n, kept_because=None):
 class PruneMixIn:
 @with_repository(compatibility=(Manifest.Operation.DELETE,))
 def do_prune(self, args, repository, manifest):
-"""Prune repository archives according to specified rules"""
+"""Prune repository archives according to specified rules."""
 if not any(
 (
 args.secondly,
@@ -280,12 +280,14 @@ class PruneMixIn:
 description=self.do_prune.__doc__,
 epilog=prune_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="prune archives",
+help="prune repository archives",
 )
 subparser.set_defaults(func=self.do_prune)
-subparser.add_argument("-n", "--dry-run", dest="dry_run", action="store_true", help="do not change repository")
 subparser.add_argument(
-"--list", dest="output_list", action="store_true", help="output verbose list of archives it keeps/prunes"
+"-n", "--dry-run", dest="dry_run", action="store_true", help="do not change the repository"
+)
+subparser.add_argument(
+"--list", dest="output_list", action="store_true", help="output a verbose list of archives it keeps/prunes"
 )
 subparser.add_argument("--short", dest="short", action="store_true", help="use a less wide archive part format")
 subparser.add_argument(
@@ -17,7 +17,7 @@ logger = create_logger()
 class RecreateMixIn:
 @with_repository(cache=True, compatibility=(Manifest.Operation.CHECK,))
 def do_recreate(self, args, repository, manifest, cache):
-"""Re-create archives"""
+"""Recreate archives."""
 matcher = build_matcher(args.patterns, args.paths)
 self.output_list = args.output_list
 self.output_filter = args.output_filter
@@ -63,15 +63,15 @@ class RecreateMixIn:
 """
 Recreate the contents of existing archives.

-recreate is a potentially dangerous function and might lead to data loss
+Recreate is a potentially dangerous function and might lead to data loss
 (if used wrongly). BE VERY CAREFUL!

 Important: Repository disk space is **not** freed until you run ``borg compact``.

 ``--exclude``, ``--exclude-from``, ``--exclude-if-present``, ``--keep-exclude-tags``
 and PATH have the exact same semantics as in "borg create", but they only check
-for files in the archives and not in the local file system. If PATHs are specified,
-the resulting archives will only contain files from these PATHs.
+files in the archives and not in the local filesystem. If paths are specified,
+the resulting archives will contain only files from those paths.

 Note that all paths in an archive are relative, therefore absolute patterns/paths
 will *not* match (``--exclude``, ``--exclude-from``, PATHs).
@@ -80,9 +80,9 @@ class RecreateMixIn:
 used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.

 **USE WITH CAUTION.**
-Depending on the PATHs and patterns given, recreate can be used to
+Depending on the paths and patterns given, recreate can be used to
 delete files from archives permanently.
-When in doubt, use ``--dry-run --verbose --list`` to see how patterns/PATHS are
+When in doubt, use ``--dry-run --verbose --list`` to see how patterns/paths are
 interpreted. See :ref:`list_item_flags` in ``borg create`` for details.

 The archive being recreated is only removed after the operation completes. The
@@ -97,8 +97,8 @@ class RecreateMixIn:

 If your most recent borg check found missing chunks, please first run another
 backup for the same data, before doing any rechunking. If you are lucky, that
-will re-create the missing chunks. Optionally, do another borg check, to see
-if the chunks are still missing).
+will recreate the missing chunks. Optionally, do another borg check to see
+if the chunks are still missing.
 """
 )
 subparser = subparsers.add_parser(
@@ -14,7 +14,7 @@ class RenameMixIn:
 @with_repository(cache=True, compatibility=(Manifest.Operation.CHECK,))
 @with_archive
 def do_rename(self, args, repository, manifest, cache, archive):
-"""Rename an existing archive"""
+"""Rename an existing archive."""
 archive.rename(args.newname)
 manifest.write()

@@ -35,10 +35,12 @@ class RenameMixIn:
 description=self.do_rename.__doc__,
 epilog=rename_epilog,
 formatter_class=argparse.RawDescriptionHelpFormatter,
-help="rename archive",
+help="rename an archive",
 )
 subparser.set_defaults(func=self.do_rename)
-subparser.add_argument("name", metavar="OLDNAME", type=archivename_validator, help="specify the archive name")
+subparser.add_argument(
+"name", metavar="OLDNAME", type=archivename_validator, help="specify the current archive name"
+)
 subparser.add_argument(
 "newname", metavar="NEWNAME", type=archivename_validator, help="specify the new archive name"
 )
@@ -16,7 +16,7 @@ logger = create_logger()


 def find_chunks(repository, repo_objs, cache, stats, ctype, clevel, olevel):
-"""find and flag chunks that need processing (usually: recompression)."""
+"""Find and flag chunks that need processing (usually: recompression)."""
 compr_keys = stats["compr_keys"] = set()
 compr_wanted = ctype, clevel, olevel
 recompress_count = 0
@@ -35,7 +35,7 @@ def find_chunks(repository, repo_objs, cache, stats, ctype, clevel, olevel):


 def process_chunks(repository, repo_objs, stats, recompress_ids, olevel):
-"""process some chunks (usually: recompress)"""
+"""Process some chunks (usually: recompress)."""
 compr_keys = stats["compr_keys"]
 if compr_keys == 0:  # work around defaultdict(int)
 compr_keys = stats["compr_keys"] = set()
@@ -87,7 +87,7 @@ def format_compression_spec(ctype, clevel, olevel):
 class RepoCompressMixIn:
 @with_repository(cache=True, manifest=True, compatibility=(Manifest.Operation.CHECK,))
 def do_repo_compress(self, args, repository, manifest, cache):
-"""Repository (re-)compression"""
+"""Repository (re-)compression."""

 def get_csettings(c):
 if isinstance(c, Auto):
@@ -17,7 +17,7 @@ class RepoCreateMixIn:
 @with_repository(create=True, exclusive=True, manifest=False)
 @with_other_repository(manifest=True, compatibility=(Manifest.Operation.READ,))
 def do_repo_create(self, args, repository, *, other_repository=None, other_manifest=None):
-"""Create a new, empty repository"""
+"""Creates a new, empty repository."""
 other_key = other_manifest.key if other_manifest is not None else None
 path = args.location.canonical_path()
 logger.info('Initializing repository at "%s"' % path)
@@ -36,18 +36,18 @@ class RepoCreateMixIn:
 if key.NAME != "plaintext":
 logger.warning(
 "\n"
-"IMPORTANT: you will need both KEY AND PASSPHRASE to access this repo!\n"
+"IMPORTANT: you will need both KEY AND PASSPHRASE to access this repository!\n"
 "\n"
 "Key storage location depends on the mode:\n"
 "- repokey modes: key is stored in the repository directory.\n"
 "- keyfile modes: key is stored in the home directory of this user.\n"
 "\n"
 "For any mode, you should:\n"
-"1. Export the borg key and store the result at a safe place:\n"
+"1. Export the Borg key and store the result in a safe place:\n"
 "   borg key export -r REPOSITORY encrypted-key-backup\n"
 "   borg key export -r REPOSITORY --paper encrypted-key-backup.txt\n"
 "   borg key export -r REPOSITORY --qr-html encrypted-key-backup.html\n"
-"2. Write down the borg key passphrase and store it at safe place."
+"2. Write down the Borg key passphrase and store it in a safe place."
 )
 logger.warning(
 "\n"
@@ -68,7 +68,7 @@ class RepoCreateMixIn:
 this is due to borgstore pre-creating all directories needed, making usage of the
 store faster.

-Encryption mode TLDR
+Encryption mode TL;DR
 ++++++++++++++++++++

 The encryption mode can only be configured when creating a new repository - you can
@@ -94,7 +94,7 @@ class RepoCreateMixIn:
 "leaving your keys inside your car" (see :ref:`borg_key_export`).
 The encryption is done locally - if you use a remote repository, the remote machine
 never sees your passphrase, your unencrypted key or your unencrypted files.
-Chunking and id generation are also based on your key to improve
+Chunking and ID generation are also based on your key to improve
 your privacy.
 7. Use the key when extracting files to decrypt them and to verify that the contents of
 the backups have not been accidentally or maliciously altered.
@@ -104,27 +104,27 @@ class RepoCreateMixIn:

 Make sure you use a good passphrase. Not too short, not too simple. The real
 encryption / decryption key is encrypted with / locked by your passphrase.
-If an attacker gets your key, he can't unlock and use it without knowing the
+If an attacker gets your key, they cannot unlock and use it without knowing the
 passphrase.

-Be careful with special or non-ascii characters in your passphrase:
+Be careful with special or non-ASCII characters in your passphrase:

-- Borg processes the passphrase as unicode (and encodes it as utf-8),
+- Borg processes the passphrase as Unicode (and encodes it as UTF-8),
 so it does not have problems dealing with even the strangest characters.
-- BUT: that does not necessarily apply to your OS / VM / keyboard configuration.
+- BUT: that does not necessarily apply to your OS/VM/keyboard configuration.

-So better use a long passphrase made from simple ascii chars than one that
-includes non-ascii stuff or characters that are hard/impossible to enter on
+So better use a long passphrase made from simple ASCII characters than one that
+includes non-ASCII stuff or characters that are hard or impossible to enter on
 a different keyboard layout.

-You can change your passphrase for existing repos at any time, it won't affect
+You can change your passphrase for existing repositories at any time; it will not affect
 the encryption/decryption key or other secrets.

 Choosing an encryption mode
 +++++++++++++++++++++++++++

 Depending on your hardware, hashing and crypto performance may vary widely.
-The easiest way to find out about what's fastest is to run ``borg benchmark cpu``.
+The easiest way to find out what is fastest is to run ``borg benchmark cpu``.

 `repokey` modes: if you want ease-of-use and "passphrase" security is good enough -
 the key will be stored in the repository (in ``repo_dir/config``).
@@ -157,14 +157,14 @@ class RepoCreateMixIn:

 .. nanorst: inline-replace

-`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
+`none` mode uses no encryption and no authentication. You are advised NOT to use this mode
 as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
 case of malicious activity in the repository.

 If you do **not** want to encrypt the contents of your backups, but still want to detect
 malicious tampering use an `authenticated` mode. It's like `repokey` minus encryption.
|
||||
To normally work with ``authenticated`` repos, you will need the passphrase, but
|
||||
there is an emergency workaround, see ``BORG_WORKAROUNDS=authenticated_no_key`` docs.
|
||||
malicious tampering, use an `authenticated` mode. It is like `repokey` minus encryption.
|
||||
To normally work with ``authenticated`` repositories, you will need the passphrase, but
|
||||
there is an emergency workaround; see ``BORG_WORKAROUNDS=authenticated_no_key`` docs.
|
||||
|
||||
Creating a related repository
|
||||
+++++++++++++++++++++++++++++
|
||||
|
|
@ -176,12 +176,12 @@ class RepoCreateMixIn:
|
|||
for deduplication) and the AE crypto keys will be newly generated random keys.
|
||||
|
||||
Optionally, if you use ``--copy-crypt-key`` you can also keep the same crypt_key
|
||||
(used for authenticated encryption). Might be desired e.g. if you want to have less
|
||||
(used for authenticated encryption). This might be desired, for example, if you want to have fewer
|
||||
keys to manage.
|
||||
|
||||
Creating related repositories is useful e.g. if you want to use ``borg transfer`` later.
|
||||
Creating related repositories is useful, for example, if you want to use ``borg transfer`` later.
|
||||
|
||||
Creating a related repository for data migration from borg 1.2 or 1.4
|
||||
Creating a related repository for data migration from Borg 1.2 or 1.4
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
You can use ``borg repo-create --other-repo ORIG_REPO --from-borg1 ...`` to create a related
|
||||
|
|
@ -210,7 +210,7 @@ class RepoCreateMixIn:
|
|||
help="reuse the key material from the other repository",
|
||||
)
|
||||
subparser.add_argument(
|
||||
"--from-borg1", dest="v1_or_v2", action="store_true", help="other repository is borg 1.x"
|
||||
"--from-borg1", dest="v1_or_v2", action="store_true", help="other repository is Borg 1.x"
|
||||
)
|
||||
subparser.add_argument(
|
||||
"-e",
|
||||
|
|
@ -226,6 +226,6 @@ class RepoCreateMixIn:
|
|||
"--copy-crypt-key",
|
||||
dest="copy_crypt_key",
|
||||
action="store_true",
|
||||
help="copy the crypt_key (used for authenticated encryption) from the key of the other repo "
|
||||
help="copy the crypt_key (used for authenticated encryption) from the key of the other repository "
|
||||
"(default: new random key).",
|
||||
)
|
||||
|
|
|
|||
|
|
@@ -17,7 +17,7 @@ logger = create_logger()
 class RepoDeleteMixIn:
     @with_repository(exclusive=True, manifest=False)
     def do_repo_delete(self, args, repository):
-        """Delete a repository"""
+        """Deletes the repository."""
         self.output_list = args.output_list
         dry_run = args.dry_run
         keep_security_info = args.keep_security_info
@@ -54,10 +54,10 @@ class RepoDeleteMixIn:
             for archive_info in manifest.archives.list(sort_by=["ts"]):
                 msg.append(format_archive(archive_info))
             else:
-                msg.append("This repository seems not to have any archives.")
+                msg.append("This repository does not appear to have any archives.")
         else:
             msg.append(
-                "This repository seems to have no manifest, so we can't "
+                "This repository seems to have no manifest, so we cannot "
                 "tell anything about its contents."
             )
 
@@ -109,19 +109,21 @@ class RepoDeleteMixIn:
             description=self.do_repo_delete.__doc__,
             epilog=repo_delete_epilog,
             formatter_class=argparse.RawDescriptionHelpFormatter,
-            help="delete repository",
+            help="delete the repository",
         )
         subparser.set_defaults(func=self.do_repo_delete)
-        subparser.add_argument("-n", "--dry-run", dest="dry_run", action="store_true", help="do not change repository")
         subparser.add_argument(
-            "--list", dest="output_list", action="store_true", help="output verbose list of archives"
+            "-n", "--dry-run", dest="dry_run", action="store_true", help="do not change the repository"
+        )
+        subparser.add_argument(
+            "--list", dest="output_list", action="store_true", help="output a verbose list of archives"
         )
         subparser.add_argument(
             "--force",
             dest="forced",
             action="count",
             default=0,
-            help="force deletion of corrupted archives, " "use ``--force --force`` in case ``--force`` does not work.",
+            help="force deletion of corrupted archives; use ``--force --force`` if a single ``--force`` does not work.",
         )
         subparser.add_argument(
             "--cache-only",
@@ -14,7 +14,7 @@ logger = create_logger()
 class RepoInfoMixIn:
     @with_repository(cache=True, compatibility=(Manifest.Operation.READ,))
     def do_repo_info(self, args, repository, manifest, cache):
-        """Show repository infos"""
+        """Show repository information."""
         key = manifest.key
         info = basic_json_data(manifest, cache=cache, extra={"security_dir": cache.security_manager.dir})
 
@@ -37,7 +37,7 @@ class RepoInfoMixIn:
                 Location: {location}
                 Repository version: {version}
                 {encryption}
-                Security dir: {security_dir}
+                Security directory: {security_dir}
                 """
             )
             .strip()
@@ -16,7 +16,7 @@ logger = create_logger()
 class RepoListMixIn:
     @with_repository(compatibility=(Manifest.Operation.READ,))
     def do_repo_list(self, args, repository, manifest):
-        """List the archives contained in a repository"""
+        """List the archives contained in a repository."""
         if args.format is not None:
             format = args.format
         elif args.short:
@@ -52,7 +52,7 @@ class RepoListMixIn:
     The FORMAT specifier syntax
     +++++++++++++++++++++++++++
 
-    The ``--format`` option uses python's `format string syntax
+    The ``--format`` option uses Python's `format string syntax
     <https://docs.python.org/3.9/library/string.html#formatstrings>`_.
 
     Examples:
@@ -16,7 +16,7 @@ logger = create_logger()
 class RepoSpaceMixIn:
     @with_repository(lock=False, manifest=False)
     def do_repo_space(self, args, repository):
-        """Manage reserved space in repository"""
+        """Manages reserved space in the repository."""
         # we work without locking here because locks don't work with full disk.
         if args.reserve_space > 0:
             storage_space_reserve_object_size = 64 * 2**20  # 64 MiB per object
@@ -35,7 +35,7 @@ class RepoSpaceMixIn:
                 if info.name.startswith("space-reserve."):
                     size += info.size
                     repository.store_delete(f"config/{info.name}")
-            print(f"Freed {format_file_size(size, iec=False)} in repository.")
+            print(f"Freed {format_file_size(size, iec=False)} in the repository.")
             print("Now run borg prune or borg delete plus borg compact to free more space.")
             print("After that, do not forget to reserve space again for next time!")
         else:  # print amount currently reserved
@@ -56,24 +56,23 @@ class RepoSpaceMixIn:
     """
     This command manages reserved space in a repository.
 
-    Borg can not work in disk-full conditions (can not lock a repo and thus can
-    not run prune/delete or compact operations to free disk space).
+    Borg cannot work in disk-full conditions (it cannot lock a repository and thus cannot
+    run prune/delete or compact operations to free disk space).
 
-    To avoid running into dead-end situations like that, you can put some objects
-    into a repository that take up some disk space. If you ever run into a
-    disk-full situation, you can free that space and then borg will be able to
-    run normally, so you can free more disk space by using prune/delete/compact.
-    After that, don't forget to reserve space again, in case you run into that
-    situation again at a later time.
+    To avoid running into such dead-end situations, you can put some objects into a
+    repository that take up disk space. If you ever run into a disk-full situation, you
+    can free that space, and then Borg will be able to run normally so you can free more
+    disk space by using ``borg prune``/``borg delete``/``borg compact``. After that, do
+    not forget to reserve space again, in case you run into that situation again later.
 
     Examples::
 
         # Create a new repository:
         $ borg repo-create ...
-        # Reserve approx. 1GB of space for emergencies:
+        # Reserve approx. 1 GiB of space for emergencies:
         $ borg repo-space --reserve 1G
 
-        # Check amount of reserved space in the repository:
+        # Check the amount of reserved space in the repository:
         $ borg repo-space
 
         # EMERGENCY! Free all reserved space to get things back to normal:
@@ -84,7 +83,7 @@ class RepoSpaceMixIn:
         $ borg repo-space --reserve 1G  # reserve space again for next time
 
 
-    Reserved space is always rounded up to use full reservation blocks of 64MiB.
+    Reserved space is always rounded up to full reservation blocks of 64 MiB.
     """
     )
     subparser = subparsers.add_parser(
@@ -110,5 +109,5 @@ class RepoSpaceMixIn:
             "--free",
             dest="free_space",
             action="store_true",
-            help="Free all reserved space. Don't forget to reserve space later again.",
+            help="Free all reserved space. Do not forget to reserve space again later.",
        )
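As a side note on the repo-space help text above: reserved space is allocated in 64 MiB objects (``64 * 2**20`` bytes) and is always rounded up to full blocks. A minimal sketch of that arithmetic — illustrative only, not Borg's actual implementation, with the function name chosen here for the example:

```python
# Sketch of the 64 MiB reservation rounding described in the repo-space help
# text. Purely illustrative arithmetic, not Borg's actual code.
BLOCK = 64 * 2**20  # one reservation object is 64 MiB


def rounded_reservation(requested_bytes: int) -> int:
    """Round a requested reservation up to whole 64 MiB blocks."""
    blocks = -(-requested_bytes // BLOCK)  # ceiling division
    return blocks * BLOCK


# A "--reserve 1G" request (10**9 bytes) needs 15 blocks:
print(rounded_reservation(10**9))  # 1006632960 (= 15 * 64 MiB = 960 MiB)
```

So a 1 GB reservation actually occupies 960 MiB of reservation objects, slightly more than requested, because partial blocks are never created.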