I am trying to set up a restic job to back up my Docker stacks, but with half of everything owned by root it becomes problematic. I’ve been wanting to look at Podman so everything isn’t owned by root, but for now I want to back up the work I’ve built.

Also, how do you deal with Docker containers that have databases? Do you have to create exports for every container that runs some form of database?

I’ve spent the last few days moving all my Docker containers to a dedicated machine. I was using a mix of NFS and local storage before, but now I am doing everything on local NVMe. My original plan was to keep everything on NFS so I would only have to worry about backups there, and I might go back to that.

  • esturniolo@alien.topB · 11 months ago

    KISS method: a script that copies the data on the fly to the /tmp dir, compresses it, encrypts it, and moves it to the destination using rclone, roughly as sketched below. It runs every hour, every 4 hours, or every 24 hours, depending on the container.

    Never fails. Neither the backups nor the restores.
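
    Something like this (a rough sketch; the source dir, GPG recipient, and rclone remote name are all placeholders):

    #!/usr/bin/env bash
    set -euo pipefail

    SRC=/opt/stacks/myapp/data            # bind-mounted container data
    WORK=/tmp/myapp-backup
    STAMP=$(date +%Y%m%d-%H%M)

    mkdir -p "$WORK"
    cp -a "$SRC" "$WORK/data-$STAMP"                                 # copy on the fly
    tar -C "$WORK" -czf "$WORK/data-$STAMP.tar.gz" "data-$STAMP"     # compress
    gpg --batch --yes --recipient backups@example.com \
        --output "$WORK/data-$STAMP.tar.gz.gpg" \
        --encrypt "$WORK/data-$STAMP.tar.gz"                         # encrypt
    rclone move "$WORK/data-$STAMP.tar.gz.gpg" offsite:docker-backups/myapp
    rm -rf "$WORK"

    A cron entry per container (hourly, every 4 hours, daily) takes care of the schedule.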

  • linxbro5000@alien.topB · 11 months ago

    I have a backup script (running as root, sketched below) that:

    • stops all containers
    • runs the restic backup
    • starts all containers
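
    A rough sketch of that kind of script, assuming an existing restic repository and password file (all paths are placeholders):

    #!/usr/bin/env bash
    # Run as root so restic can read the root-owned volume data.
    set -euo pipefail

    RUNNING=$(docker ps -q)              # remember which containers were running

    if [ -n "$RUNNING" ]; then
        docker stop $RUNNING             # stop all containers
    fi

    restic -r /mnt/backup/restic \
           --password-file /root/.restic-pass \
           backup /opt/stacks            # back up compose files and bind-mounted data

    if [ -n "$RUNNING" ]; then
        docker start $RUNNING            # start them again
    fi
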
    • mirisbowring@alien.topB · 11 months ago

      I had this before, but it caused struggles with some containers, since they run specific checks and scans during startup, which resulted in high CPU and disk load.

      Since Unraid supports ZFS, I am using that for the Docker stuff and take snapshots to an external disk as backup, roughly as sketched below.

      No need to stop containers anymore.
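
      A rough sketch of that flow, with placeholder dataset names (tank/docker for the local data, backup as the pool on the external disk):

      #!/usr/bin/env bash
      set -euo pipefail

      SNAP=tank/docker@backup-$(date +%Y%m%d-%H%M)

      # Snapshots are atomic, so the containers can keep running.
      zfs snapshot "$SNAP"

      # First run: full send. Later runs would use an incremental send
      # (zfs send -i <previous-snapshot> "$SNAP") instead.
      zfs send "$SNAP" | zfs receive backup/docker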

  • root-node@alien.topB · 11 months ago

    For backups I use Nautical Backup.

    For the “owned by root” problem, I make sure all my Docker Compose files have [P]UID and [P]GID set to 1000 (the user my Docker containers run as). All 20 of my containers run like this with no issues.

    How are you launching your containers? Docker Compose is the way; I have set the following in all of mine:

    environment:
      - PUID=1000   # read by images (e.g. linuxserver.io ones) that switch to this user at startup
      - PGID=1000

    user: "1000:1000"   # Compose-level setting that runs the container process as 1000:1000
    
    • Not_your_guy_buddy42@alien.topB · 11 months ago

      Hey, this is where I am stuck right now: I want to keep the Docker volumes, as bind mounts, on my NAS share as well. If the containers run as a separate non-root user (say 1001), then I can mount that share as 1001… sounds good, right?

      But somebody suggested running each container as its own user, and then I would need lots of differently owned directories. I wonder if I could keep mounting subdirectories of the same NAS share as different users, so each of them can have their own file access (see the sketch below)? Perhaps that is overkill.
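
      If the share is exported over SMB/CIFS, that does work, because the uid=/gid= mount options force local ownership per mountpoint; the server path, credentials file, and IDs below are placeholders. (With NFS, ownership comes from the server side, so you would chown the subdirectories there instead.)

      # Each app gets its own subdirectory of the share, mounted with its own uid/gid.
      sudo mount -t cifs //nas/docker/app1 /srv/app1 \
          -o credentials=/root/.smb-cred,uid=1001,gid=1001,file_mode=0770,dir_mode=0770
      sudo mount -t cifs //nas/docker/app2 /srv/app2 \
          -o credentials=/root/.smb-cred,uid=1002,gid=1002,file_mode=0770,dir_mode=0770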

      (For OP: I’ve been on a self-hosting binge the past week, trying to work my way in at least the general direction of best practice… For the container databases I’ve started using tiredofit/docker-db-backup (it does database dumps), but I also discovered jdfranel’s docker backup, which looks great as well. I save the dumps on a volume mounted from the NAS; it’s btrfs, and there is a folder replication (snapshots) tool. So far, so good.)
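
      A plainer alternative to those images, if you just want dumps landing on the NAS mount, is a cron-driven docker exec; this sketch assumes a hypothetical Postgres container called myapp-db and placeholder credentials and paths:

      #!/usr/bin/env bash
      set -euo pipefail

      DEST=/mnt/nas/db-dumps
      STAMP=$(date +%Y%m%d-%H%M)

      # The dump runs inside the container; the compressed output lands on the NAS mount.
      docker exec myapp-db pg_dump -U myapp myapp | gzip > "$DEST/myapp-$STAMP.sql.gz"

      # For a MariaDB/MySQL container the equivalent would use mysqldump instead of pg_dump.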

  • PaulEngineer-89@alien.topB · 11 months ago

    Don’t back up the container!!

    Map volumes with your data to physical storage and then simply back up those folders with the rest of your data (see the sketch below). The containers themselves are already backed up either in your development directory (if you wrote them) or on GitHub, so, like the operating system itself, there is no need to back anything up. The whole idea of Docker is that containers are ephemeral; they are reset every time they are recreated.
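
    In practice that just means bind-mounting the data to a host folder and pointing the backup tool at that folder; a small sketch with placeholder paths, using OP’s restic:

    # The compose file maps the app’s data to a host directory, e.g.
    #   volumes:
    #     - /srv/appdata/myapp:/config
    # and the backup only has to cover the host directories:
    restic -r /mnt/backup/restic --password-file /root/.restic-pass backup /srv/appdata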

  • SnakeBDD@alien.topB · 11 months ago

    All my Docker containers have their persistent data in Docker volumes located on a BTRFS mount. A cron job takes a snapshot of the BTRFS volume, then calls btrfs send and pipes that through tar and gpg directly to AWS S3, roughly as sketched below.
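
    A rough sketch of that pipeline, with placeholder subvolume, GPG recipient, and bucket names (gzip stands in here for the compression step; aws s3 cp reads the stream from stdin via “-”):

    #!/usr/bin/env bash
    set -euo pipefail

    STAMP=$(date +%Y%m%d)
    SNAP=/mnt/data/.snapshots/docker-$STAMP

    # btrfs send needs a read-only snapshot.
    btrfs subvolume snapshot -r /mnt/data/docker "$SNAP"

    btrfs send "$SNAP" \
      | gzip \
      | gpg --batch --yes --recipient backups@example.com --encrypt \
      | aws s3 cp - "s3://my-backup-bucket/docker-$STAMP.btrfs.gz.gpg"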