Bug #7655

mentat-backup doesn't delete old files

Added by Rajmund Hruška over 1 year ago. Updated over 1 year ago.

Status: New
Priority: Normal
Assignee: -
Category: Development - Core
Target version:
Start date: 05/17/2023
Due date:
% Done: 0%
Estimated time:
To be discussed: No

Description

As Jakub Judiny found out, mentat-backup.py contains this block of code:

if False and not self.c(self.CONFIG_NO_UPLOAD):
    # Mount the remote directory to local directory.
    self._remote_mount(self.c(self.CONFIG_MOUNT_POINT))

    # Move the backup file to remote storage.
    self._copy_to_remote(backup_file, self.c(self.CONFIG_MOUNT_POINT), self.c(self.CONFIG_INTERVAL))

    # Trim the locally present backup files.
    self._clean_local_backups(backup_mask, backup_file)

    # Unmount the remote directory from local directory.
    self._remote_unmount(self.c(self.CONFIG_MOUNT_POINT))

    # Update persistent state.
    self.pstate['ts_last_' + self.c(self.CONFIG_INTERVAL)] = time.time()

Obviously, this code is never executed: the condition begins with "if False and ...", which is always false. As a result, old backups are never deleted and the machine gradually runs out of disk space.
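For reference, the dead branch was meant to trim old local backups. A minimal standalone sketch of that kind of rotation logic might look like this (the function name, the backup_dir/backup_mask/keep parameters, and the retention count are illustrative; the real helper in mentat-backup.py is _clean_local_backups and its interface may differ):

```python
import glob
import os

def clean_local_backups(backup_dir, backup_mask, keep=7):
    """Delete all but the `keep` newest files matching `backup_mask`."""
    # Collect matching backup files, newest first by modification time.
    files = sorted(
        glob.glob(os.path.join(backup_dir, backup_mask)),
        key=os.path.getmtime,
        reverse=True,
    )
    # Remove everything past the retention window.
    for path in files[keep:]:
        os.remove(path)
    return files[:keep]
```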

#1

Updated by Pavel Kácha over 1 year ago

Looking into it - wouldn't it make sense to replace this with a shell script about a tenth of the size?

It seems to me there is quite a lot of Python sugar and boilerplate here just to mount, pg_dump, tar, cp, rm, umount. Especially since this is the kind of thing that should be set up very carefully, for a very specific environment, by a very specific admin (what if I don't want mount+cp, but an already configured duplicity or borg instead?).

#2

Updated by Rajmund Hruška over 1 year ago

I don't mind removing/rewriting this module.

Also, I am worried we could lose some data. The backup is done incrementally, so if we delete some of the older backups, we could lose data.

#3

Updated by Pavel Kácha over 1 year ago

From 2023-06-22 meeting:

Let's ditch all the parts that try to back up the data onto external storage (let's leave that to standard machine-level backup tools - duplicity, borg, tar, whatever the admin prefers and sets up). A simplified module that regularly dumps a chunk of new data and possibly deletes old chunks is enough.

Note - the file creation should be atomic (e.g. creating the file as a dotfile/tempfile and then renaming it to its final name), so that external backup tools only ever see the previous or the final state, never a partial file under the final name.
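The atomic-creation idea above can be sketched in Python as writing to a temporary file in the target directory and then renaming it, since os.replace is an atomic rename when source and destination are on the same filesystem (the function and parameter names here are illustrative, not taken from mentat-backup.py):

```python
import os
import tempfile

def write_backup_atomically(data: bytes, final_path: str) -> None:
    """Write `data` so that `final_path` only ever holds a complete file."""
    directory = os.path.dirname(final_path) or "."
    # Create the temp file in the target directory, so the final rename
    # stays on the same filesystem and remains atomic.
    fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".backup-tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the data is on disk first
        os.replace(tmp_path, final_path)  # atomic rename to the final name
    except BaseException:
        os.unlink(tmp_path)  # clean up the partial temp file on failure
        raise
```

An external backup tool scanning the directory will therefore see either the old dump or the new one under the final name, never a half-written file.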
