Support Nextcloud / WebDav (#48)

* add studio-b12/gowebdav to be able to upload to webdav server

* make sure all env variables are present for webdav upload

* implement file upload to WebDav server

directory defaults to the base directory

* docs: add the new feature to the documentation

* if no WebDAV env variables are given, throw no error

* docs: use more elegant english :D

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* docs: use official spelling of "WebDAV"

* perf: golang likes to return early instead of having an else block
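
  For illustration, a minimal sketch of the early-return style referred to here (the helper and its name are made up for this example and are not part of the diff):

  ```go
  package validate

  import "errors"

  // requireWebdavCredentials handles the failure case first and returns early,
  // so the happy path needs no else block.
  func requireWebdavCredentials(username, password string) error {
  	if username == "" || password == "" {
  		return errors.New("both WEBDAV_USERNAME and WEBDAV_PASSWORD must be set")
  	}
  	return nil
  }
  ```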

* use WEBDAV_PATH instead of WEBDAV_DIRECTORY

* use split_words for more convenience

as shown here: https://github.com/kelseyhightower/envconfig#struct-tag-support
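
  A minimal standalone sketch of how `split_words` derives `WEBDAV_URL` etc. from the CamelCase field names (this assumes the envconfig package linked above; it is not the actual main.go):

  ```go
  package main

  import (
  	"fmt"
  	"log"

  	"github.com/kelseyhightower/envconfig"
  )

  type config struct {
  	WebdavUrl      string `split_words:"true"` // read from WEBDAV_URL
  	WebdavPath     string `split_words:"true" default:"/"`
  	WebdavUsername string `split_words:"true"`
  	WebdavPassword string `split_words:"true"`
  }

  func main() {
  	var c config
  	// envconfig splits the CamelCase field names on word boundaries,
  	// so no explicit envconfig:"..." tag is needed per field.
  	if err := envconfig.Process("", &c); err != nil {
  		log.Fatal(err)
  	}
  	fmt.Println(c.WebdavUrl, c.WebdavPath)
  }
  ```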

* simplify

* feat: add pruning of files in WebDAV remote

Based on / Inspired by the minio/S3 implementation of pruning remote files.
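
  Condensed sketch of the pruning rule (the `prunable` helper is illustrative only; in the diff below the logic lives inline in `pruneOldBackups`):

  ```go
  package prune

  import (
  	"io/fs"
  	"time"
  )

  // prunable collects remote files older than the retention deadline and
  // reports whether deleting them would remove every remote copy, in which
  // case the caller refuses to prune (mirroring the minio/S3 behaviour).
  func prunable(candidates []fs.FileInfo, retentionDays int) (matches []fs.FileInfo, wouldDeleteAll bool) {
  	deadline := time.Now().AddDate(0, 0, -retentionDays)
  	for _, candidate := range candidates {
  		if candidate.ModTime().Before(deadline) {
  			matches = append(matches, candidate)
  		}
  	}
  	return matches, len(matches) != 0 && len(matches) == len(candidates)
  }
  ```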

* remove logging left over from development

* test: first try implementing tests

Sadly I have to use the remote pipeline -- local won't work for me.

* test: adapt used volume names

* test: specify image only once!

* test: minio AND webdav data should be present

* test: backups on WebDAV remote are lying in the root directory

* test: the webdav server stores data in /var/lib/dav

* trying with data subfolder

* test: 1 container was added, so the number rose from 3 to 4

* webdav subfolder is "data", not "backup"

* fix: password AND username must be defined

not password OR username

* improve logging

* feat: if the given path on the server isn't present, it will be created

* test: add creation of new folder for webdav to tests

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
Kaerbr 2022-01-22 13:29:21 +01:00 committed by GitHub
parent 6b79f1914b
commit 6ded00aa06
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
6 changed files with 145 additions and 15 deletions


@ -7,7 +7,7 @@
Backup Docker volumes locally or to any S3 compatible storage.
The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3 or WebDAV compatible storage__ (or any combination), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
<!-- MarkdownTOC -->
@ -28,6 +28,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Recipes](#recipes)
  - [Backing up to AWS S3](#backing-up-to-aws-s3)
  - [Backing up to MinIO](#backing-up-to-minio)
  - [Backing up to WebDAV](#backing-up-to-webdav)
  - [Backing up locally](#backing-up-locally)
  - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
  - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@ -189,6 +190,24 @@ You can populate below template according to your requirements and use it as you
# AWS_ENDPOINT_INSECURE="true"
# In addition, you can also back up files to any WebDAV server.
# The URL of the remote WebDAV server
# WEBDAV_URL="https://webdav.example.com"
# The directory on the WebDAV server to place the backups in.
# If the path is not present on the server, it will be created!
# WEBDAV_PATH="/my/directory/"
# The username for the WebDAV server
# WEBDAV_USERNAME="user"
# The password for the WebDAV server
# WEBDAV_PASSWORD="password"
# In addition to storing backups remotely, you can also keep local copies.
# Pass a container-local path to store your backups if needed. You also need to
# mount a local folder or Docker volume into that location (`/archive`
@ -530,6 +549,28 @@ volumes:
  data:
```
### Backing up to WebDAV
```yml
version: '3'

services:
  # ... define other services using the `data` volume here
  backup:
    image: offen/docker-volume-backup:latest
    environment:
      WEBDAV_URL: https://webdav.mydomain.me
      WEBDAV_PATH: /my/directory/
      WEBDAV_USERNAME: user
      WEBDAV_PASSWORD: password
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Backing up locally
```yml


@ -9,6 +9,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"io" "io"
"io/fs"
"os" "os"
"path" "path"
"path/filepath" "path/filepath"
@ -32,6 +33,7 @@ import (
"github.com/otiai10/copy" "github.com/otiai10/copy"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
"golang.org/x/crypto/openpgp" "golang.org/x/crypto/openpgp"
"github.com/studio-b12/gowebdav"
) )
func main() { func main() {
@ -86,12 +88,13 @@ func main() {
// script holds all the stateful information required to orchestrate a
// single backup run.
type script struct {
	cli          *client.Client
	mc           *minio.Client
	webdavClient *gowebdav.Client
	logger       *logrus.Logger
	sender       *router.ServiceRouter
	hooks        []hook
	hookLevel    hookLevel

	start time.Time
	file  string
@ -127,6 +130,10 @@ type config struct {
	EmailSMTPPort     int    `envconfig:"EMAIL_SMTP_PORT" default:"587"`
	EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
	EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
	WebdavUrl         string `split_words:"true"`
	WebdavPath        string `split_words:"true" default:"/"`
	WebdavUsername    string `split_words:"true"`
	WebdavPassword    string `split_words:"true"`
}

var msgBackupFailed = "backup run failed"
@ -209,6 +216,17 @@ func newScript() (*script, error) {
		s.mc = mc
	}
	// WebDAV: check env variables and instantiate the client
	if s.c.WebdavUrl != "" {
		if s.c.WebdavUsername == "" || s.c.WebdavPassword == "" {
			return nil, errors.New("newScript: WEBDAV_URL is defined, but no credentials were provided")
		} else {
			webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
			s.webdavClient = webdavClient
		}
	}
	if s.c.EmailNotificationRecipient != "" {
		emailURL := fmt.Sprintf(
			"smtp://%s:%s@%s:%d/?from=%s&to=%s",
@ -517,6 +535,21 @@ func (s *script) copyBackup() error {
s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName) s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName)
} }
	// WebDAV file upload
	if s.webdavClient != nil {
		bytes, err := os.ReadFile(s.file)
		if err != nil {
			return fmt.Errorf("copyBackup: error reading the file to be uploaded: %w", err)
		}
		if err := s.webdavClient.MkdirAll(s.c.WebdavPath, 0644); err != nil {
			return fmt.Errorf("copyBackup: error creating directory '%s' on WebDAV server: %w", s.c.WebdavPath, err)
		}
		if err := s.webdavClient.Write(filepath.Join(s.c.WebdavPath, name), bytes, 0644); err != nil {
			return fmt.Errorf("copyBackup: error uploading the file to WebDAV server: %w", err)
		}
		s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath)
	}
	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
		if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
			return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
@ -551,6 +584,7 @@ func (s *script) pruneOldBackups() error {
	deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))

	// Prune minio/S3 backups
	if s.mc != nil {
		candidates := s.mc.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
			WithMetadata: true,
@ -612,6 +646,38 @@ func (s *script) pruneOldBackups() error {
		}
	}
	// Prune WebDAV backups
	if s.webdavClient != nil {
		candidates, err := s.webdavClient.ReadDir(s.c.WebdavPath)
		if err != nil {
			return fmt.Errorf("pruneOldBackups: error looking up candidates from remote storage: %w", err)
		}
		var matches []fs.FileInfo
		var lenCandidates int
		for _, candidate := range candidates {
			lenCandidates++
			if candidate.ModTime().Before(deadline) {
				matches = append(matches, candidate)
			}
		}

		if len(matches) != 0 && len(matches) != lenCandidates {
			for _, match := range matches {
				if err := s.webdavClient.Remove(filepath.Join(s.c.WebdavPath, match.Name())); err != nil {
					return fmt.Errorf("pruneOldBackups: error removing a file from remote storage: %w", err)
				}
				s.logger.Infof("Pruned %s from WebDAV: %s", match.Name(), filepath.Join(s.c.WebdavUrl, s.c.WebdavPath))
			}
			s.logger.Infof("Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.", len(matches), lenCandidates, s.c.BackupRetentionDays)
		} else if len(matches) != 0 && len(matches) == lenCandidates {
			s.logger.Warnf("The current configuration would delete all %d remote backup copies.", len(matches))
			s.logger.Warn("Refusing to do so, please check your configuration.")
		} else {
			s.logger.Infof("None of %d remote backup(s) were pruned.", lenCandidates)
		}
	}
	// Prune local backups
	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
		globPattern := path.Join(
			s.c.BackupArchive,

go.mod

@ -45,6 +45,7 @@ require (
	github.com/opencontainers/image-spec v1.0.1 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/rs/xid v1.3.0 // indirect
	github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f // indirect
	golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 // indirect
	golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 // indirect
	golang.org/x/text v0.3.6 // indirect

go.sum

@ -659,6 +659,8 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f h1:L2NE7BXnSlSLoNYZ0lCwZDjdnYjCNYC71k9ClZUTFTs=
github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=


@ -10,13 +10,23 @@ services:
      MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
    entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
    volumes:
      - minio_backup_data:/data

  webdav:
    image: bytemark/webdav:2.4
    environment:
      AUTH_TYPE: Digest
      USERNAME: test
      PASSWORD: test
    volumes:
      - webdav_backup_data:/var/lib/dav

  backup: &default_backup_service
    image: offen/docker-volume-backup:${TEST_VERSION}
    hostname: hostnametoken
    depends_on:
      - minio
      - webdav
    restart: always
    environment:
      AWS_ACCESS_KEY_ID: test
@ -32,6 +42,10 @@ services:
      BACKUP_PRUNING_LEEWAY: 5s
      BACKUP_PRUNING_PREFIX: test
      GPG_PASSPHRASE: 1234secret
      WEBDAV_URL: http://webdav/
      WEBDAV_PATH: /my/new/path/
      WEBDAV_USERNAME: test
      WEBDAV_PASSWORD: test
    volumes:
      - ./local:/archive
      - app_data:/backup/app_data:ro
@ -45,5 +59,6 @@ services:
      - app_data:/var/opt/offen

volumes:
  minio_backup_data:
  webdav_backup_data:
  app_data:


@ -13,10 +13,13 @@ docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker-compose exec backup backup

docker run --rm -it \
  -v compose_minio_backup_data:/minio_data \
  -v compose_webdav_backup_data:/webdav_data alpine \
  ash -c 'apk add gnupg && \
    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /minio_data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

echo "[TEST:PASS] Found relevant files in untared remote backups."

test -L ./local/test-hostnametoken.latest.tar.gz.gpg
echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test-hostnametoken.tar.gz.gpg > ./local/decrypted.tar.gz
@ -26,7 +29,7 @@ test -L /tmp/backup/app_data/db.link
echo "[TEST:PASS] Found relevant files in untared local backup." echo "[TEST:PASS] Found relevant files in untared local backup."
if [ "$(docker-compose ps -q | wc -l)" != "3" ]; then if [ "$(docker-compose ps -q | wc -l)" != "4" ]; then
echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:" echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
docker-compose ps docker-compose ps
exit 1 exit 1
@ -43,8 +46,10 @@ sleep 5
docker-compose exec backup backup

docker run --rm -it \
  -v compose_minio_backup_data:/minio_data \
  -v compose_webdav_backup_data:/webdav_data alpine \
  ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ] && \
    [ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'

echo "[TEST:PASS] Remote backups have not been deleted."