Bareos / Bacula

systemctl stop bareos-dir bareos-fd bareos-sd
systemctl start bareos-dir bareos-fd bareos-sd
systemctl status bareos-dir bareos-fd bareos-sd

Configuration

Retention periods

In Client and Pool resources:

File retention (default 60 days): https://docs.bareos.org/Configuration/Director.html#config-Dir_Client_FileRetention

Job Retention (default 180 days): https://docs.bareos.org/Configuration/Director.html#config-Dir_Client_JobRetention

You will normally set the File retention period to be less than the Job retention period. The Job retention period can actually end up shorter than the value you specify here if the Volume Retention (Dir->Pool) directive is set to a smaller duration: the Job retention period and the Volume retention period are applied independently, so the smaller of the two takes precedence.

In Pool resource:

Volume Retention (default 365 days): https://docs.bareos.org/Configuration/Director.html#config-Dir_Pool_VolumeRetention
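
A minimal sketch of where these directives go (the client and pool names are examples taken from elsewhere on this page; address and password are placeholders):

Client {
  Name = filer04-fd
  Address = filer04.example.org    # placeholder
  Password = "secret"              # placeholder
  File Retention = 60 days
  Job Retention = 180 days
}

Pool {
  Name = LTO-Full
  Pool Type = Backup
  Volume Retention = 365 days
}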

Prune

Automatic pruning is controlled by two Auto Prune directives:

Auto Prune (Dir->Client): prunes expired jobs and files at the end of each job, default no

Auto Prune (Dir->Pool): when no appendable volume is found, prunes expired volumes -> removes their job and file records and possibly recycles the volume, default yes
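
Pruning can also be triggered manually from bconsole (the client and volume names below are examples from this page):

prune files client=filer04-fd
prune jobs client=filer04-fd
prune volume=BPR001L5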

Duplicates
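
In the Job (or JobDefs) resource: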

  Allow Duplicate Jobs = No
  Cancel Lower Level Duplicates = Yes
  Cancel Queued Duplicates = Yes
  Cancel Running Duplicates = No

Concurrent jobs

Several resources set a maximum number of simultaneous jobs:

Directive                                  Default
Maximum Concurrent Jobs (Dir->Director)    1
Maximum Concurrent Jobs (Dir->Client)      1
Maximum Concurrent Jobs (Dir->Job)         1
Maximum Concurrent Jobs (Dir->Storage)     1
Maximum Concurrent Jobs (Sd->Storage)      20
Maximum Concurrent Jobs (Sd->Device)       ??
Maximum Concurrent Jobs (Fd->Client)       20

https://docs.bareos.org/Appendix/Troubleshooting.html#concurrentjobs
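
To actually run several jobs at once the directive has to be raised in every resource on the path, since the lowest limit wins. A sketch with an arbitrary value of 10, to be merged into the existing resources:

# bareos-dir: Director resource, plus the relevant Job, Client and Storage resources
Maximum Concurrent Jobs = 10

# bareos-sd: Storage resource (and the Device resource if a single drive should accept several jobs)
Maximum Concurrent Jobs = 10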

Utilities

bconsole – Bareos’s management Console

status slots
purge volume=000384
update volume=000384 pool=Archive2020 volstatus=Recycle
list jobs jobstatus=T client=filer04-fd


mount storage=filer04-Scalar_i40_Autochanger slot=5 drive=0
release storage=filer04-Scalar_i40_Drive0

The list nextvol command will print the Volume name to be used by the specified job.

list nextvol
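
A job can also be given explicitly (the job name is an example used elsewhere on this page):

list nextvol job=backup_projects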

documentation: https://docs.bareos.org/TasksAndConcepts/BareosConsole.html

btape – Bareos’s Tape interface test program

The storage daemon must not be running! (or it will hold the drive)
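
Stop it first if needed:

systemctl stop bareos-sd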

$ btape /dev/tape/by-id/scsi-35001438023b486c2-nst
Tape block granularity is 1024 bytes.
btape: stored/butil.cc:293-0 Using device: "/dev/tape/by-id/scsi-35001438023b486c2-nst" for writing.
btape: stored/btape.cc:481-0 open device "HP-Ultrium-5-SCSI-0" (/dev/tape/by-id/scsi-35001438023b486c2-nst): OK
*speed skip_random,skip_raw,skip_block

https://www.bacula.org/11.0.x-manuals/en/problems/Testing_Your_Tape_Drive_Wit.html#blb:btape1

Tasks

Set up storage

SQL backup

Bareos Web Interface

Autochanger

Get the current contents of the slots

update slots
status slots
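
update slots reads the barcodes; if the library has no barcode reader, each tape can be mounted and read instead (slow):

update slots scan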

Import/Export volumes

export volume=BPR001L5
export volume=000335|000346
export storage=<storage-name> srcslots=<slot-selection> [dstslots=<slot-selection> volume=<volume-name> scan]

If there are enough empty slots in the library:

import
import srcslots=47
import storage=Autochanger volume=000734
import storage=<storage-name> [srcslots=<slot-selection> dstslots=<slot-selection> volume=<volume-name> scan]

Move a volume between slots

move storage=Autochanger srcslots=1 dstslots=4

Load a volume in a drive

mount storage=Autochanger slot=19 drive=0

Eject a volume from a drive

release storage=Autochanger drive=0

Label a single volume

label storage=my-comp-sd pool=LTO-Full volume=FUL010L5

Label a range of tapes using barcodes

label storage=Autochanger pool=Scratch slots=6-20 barcodes
label storage=Autochanger drive=0 pool=Unused slots=6,10,13,14,19 barcodes yes

Pools

Propagate changes

If you change the Pool definition, you have to manually run the update pool command in the console to propagate the changes to existing volumes.
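
For example (pool name taken from elsewhere on this page):

update pool=LTO-Full

To also copy the new defaults onto volumes that are already labeled, run update volume and choose the “All Volumes from Pool” option from the menu.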

Schedules

https://docs.bareos.org/Configuration/Director.html#schedule-resource

Create a manual schedule

An empty Schedule resource (one with no Run directives) never triggers jobs automatically:

Schedule {
  Name = "Manual"
}
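
Jobs that reference this schedule are started by hand instead, e.g. (job name from the scheduler output below):

run job=backup_projects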

Show what is currently scheduled

*status schedule=TwoWeeksPerSet days=7
Scheduler Jobs:

Schedule               Jobs Triggered
===========================================================
TwoWeeksPerSet
                       backup_projects

====

Scheduler Preview for 7 days:

Date                   Schedule                Overrides
==============================================================
Fri 04-Dec-2020 23:00  TwoWeeksPerSet          Level=Incremental Pool=BackupSetA
Mon 07-Dec-2020 23:00  TwoWeeksPerSet          Level=Incremental Pool=BackupSetA
Tue 08-Dec-2020 23:00  TwoWeeksPerSet          Level=Incremental Pool=BackupSetA
Wed 09-Dec-2020 23:00  TwoWeeksPerSet          Level=Incremental Pool=BackupSetA
Thu 10-Dec-2020 23:00  TwoWeeksPerSet          Level=Incremental Pool=BackupSetA
====

show schedule

*show schedule=TwoWeeksPerSet
Schedule {
  Name = "TwoWeeksPerSet"
  run = pool="BackupSetA" Differential Fri w00,w04,w08,w12,w16,w20,w24,w28,w32,w36,w40,w44,w48,w52 at 23:00
  run = pool="BackupSetA" Incremental Mon-Fri w01,w05,w09,w13,w17,w21,w25,w29,w33,w37,w41,w45,w49,w53 at 23:00
  run = pool="BackupSetA" Incremental Mon-Thu w02,w06,w10,w14,w18,w22,w26,w30,w34,w38,w42,w46,w50 at 23:00
  run = pool="BackupSetB" Differential Fri w02,w06,w10,w14,w18,w22,w26,w30,w34,w38,w42,w46,w50 at 23:00
  run = pool="BackupSetB" Incremental Mon-Fri w03,w07,w11,w15,w19,w23,w27,w31,w35,w39,w43,w47,w51 at 23:00
  run = pool="BackupSetB" Incremental Mon-Thu w00,w04,w08,w12,w16,w20,w24,w28,w32,w36,w40,w44,w48,w52 at 23:00
}

Examples

Schedule {
  Name = "TwoWeeksPerSet"
  Run = Level=Differential Pool=BackupSetA w01/w04 fri at 23:00
  Run = Level=Incremental Pool=BackupSetA w02/w04 mon-fri at 23:00
  Run = Level=Incremental Pool=BackupSetA w03/w04 mon-thu at 23:00
  
  Run = Level=Differential Pool=BackupSetB w03/w04 fri at 23:00
  Run = Level=Incremental Pool=BackupSetB w04/w04 mon-fri at 23:00
  Run = Level=Incremental Pool=BackupSetB w01/w04 mon-thu at 23:00
}

Jobs

https://docs.bareos.org/Configuration/Director.html#job-resource

Get the log of a job

list joblog jobid=9

List jobs with a certain status

list jobs jobstatus=T client=filer04-fd

Cancel a job on a storage daemon

cancel storage=Autochanger Jobid=17

Cancel a range of jobs

for i in 127{1..5} ; do echo cancel JobId=$i ; done | bconsole
for i in {267..392} ; do echo cancel JobId=$i ; done | bconsole

Delete all records for aborted jobs

for jobid in $(echo "llist jobs client=san-fd jobstatus=A" | bconsole | grep -E "^\s+jobid" | cut -d ':' -f 2 | tr -d ' ') ; do echo delete jobid=$jobid ; done | bconsole

Volumes

Get a volume back

List the jobs on it:

list jobs volume=BPR001L5

Delete those jobs:

delete jobid=8

Prune the volume:

*prune volume=BPR001L5 yes
The current Volume retention period is: 1 year
There are no more Jobs associated with Volume "BPR001L5". Marking it purged.
You have messages.

https://docs.bareos.org/TasksAndConcepts/BareosConsole.html#console-commands

Truncate a Purged file volume

USAGE: truncate volstatus=Purged [storage=<storage>] [pool=<pool>] [volume=<volume>] [yes]
*truncate volstatus=Purged storage=File pool=Full volume=Full-0015
+---------+------------+------+---------+-----------+---------------------+----------+----------------+-----------+---------------+---------+
| MediaId | VolumeName | Pool | Storage | MediaType | LastWritten         | VolFiles | VolBytes       | VolStatus | ActionOnPurge | Comment |
+---------+------------+------+---------+-----------+---------------------+----------+----------------+-----------+---------------+---------+
|      15 | Full-0015  | Full | File    | File      | 2020-12-03 23:05:25 |       12 | 52,173,333,467 | Purged    |             0 | NULL    |
+---------+------------+------+---------+-----------+---------------------+----------+----------------+-----------+---------------+---------+
Truncate listed volumes (yes/no)? yes
Connecting to Storage daemon File at backup:9103 ...
Sending relabel command from "Full-0015" to "Full-0015" ...
3000 OK label. VolBytes=190 Volume="Full-0015" Device="FileStorage" (/var/lib/bareos/storage)
The volume 'Full-0015' has been truncated.

Action on every volume in a pool

Purge all volumes in pool Archive2021 and move them to pool Scratch:

for volname in $(echo "llist media pool=Archive2021" |bconsole | grep VolumeName | cut -d ':' -f 2 | tr -d ' '); \
do \
echo purge volume=$volname |bconsole; \
echo update volume=$volname pool=Scratch |bconsole; \
done

Delete all volumes in pool Scratch from the catalog:

for volname in $(echo "llist media pool=Scratch" |bconsole | grep VolumeName | cut -d ':' -f 2 | tr -d ' '); \
do \
echo delete volume=$volname yes |bconsole; \
done

Update retention period (1209600 seconds = 14 days)

*update volume=000018L8 volretention=1209600
New retention period is: 14 days

Relabel tapes

When labeling fails with “3920 Cannot label Volume because it is already labeled”, wipe the existing labels first:

for slot in {11..20}; \
do \
	mtx -f /dev/tape/by-id/scsi-1QUANTUM_D0H0281913_LLA load $slot 1 && \
	mt -f /dev/tape/by-id/scsi-3500308c38ce84004-nst rewind && \
	mt -f /dev/tape/by-id/scsi-3500308c38ce84004-nst weof && \
	mtx -f /dev/tape/by-id/scsi-1QUANTUM_D0H0281913_LLA unload $slot 1; \
done;

A little script to wipe the label from every tape in the autochanger (the commented-out SLOT=$1 shows how it could take a single slot as argument instead)

#!/bin/bash
  
#SLOT=$1

AUTOLOADER=/dev/tape/by-id/scsi-1QUANTUM_D0H0281913_LLA

#DRIVE=/dev/tape/by-id/scsi-3500308c38ce84000-nst
#DRIVEID=0

DRIVE=/dev/tape/by-id/scsi-3500308c38ce84004-nst
DRIVEID=1

for SLOT in {1..24};
do
        #echo "new label for slot $SLOT"

        # load tape from $SLOT into the drive $DRIVEID
        mtx -f $AUTOLOADER load $SLOT $DRIVEID

        # rewind the tape present in this drive
        echo "Rewinding tape..."
        mt -f $DRIVE rewind

        # wipe the start of the tape (including the label)
        echo "Writing eof at start"
        mt -f $DRIVE weof

        # unload the tape from the drive $DRIVEID back to its original slot
        mtx -f $AUTOLOADER unload $SLOT $DRIVEID
done;

Test daemons

bareos-dir -t
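
The same flag works for the other daemons:

bareos-sd -t
bareos-fd -t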

Installation

sudo wget -O /etc/apt/sources.list.d/bareos.list https://download.bareos.org/bareos/release/20/xUbuntu_20.04/bareos.list
wget -q https://download.bareos.org/bareos/release/20/xUbuntu_20.04/Release.key -O- | sudo apt-key add -
sudo apt update

Everything (director, catalog, etc.):

sudo apt install postgresql postgresql-contrib
sudo apt install bareos bareos-database-postgresql
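
If the catalog database is not created during package installation, the bundled helper scripts set it up (paths as shipped by the Bareos packages):

su postgres -c /usr/lib/bareos/scripts/create_bareos_database
su postgres -c /usr/lib/bareos/scripts/make_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges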

Only the storage part on a computer equipped with a tape drive:

sudo apt install --install-suggests bareos-storage-tape

Documentation

https://docs.bareos.org/index.html