Using S3 object store#
By default, piler stores email files on the local filesystem under subdirectories of /var/piler, which requires a lot of local disk space.
Piler can also store files on S3-compatible object stores, e.g. Amazon S3, Wasabi, Exoscale, etc., or you can run your own S3 object store with 3rd-party software such as Minio. Note that you should decide where to store the email files before you start archiving: piler does not support a mixed mode where some emails are stored locally and others on an S3 object store.
When S3 storage is enabled, the piler daemon writes all files to the /var/piler/s3 directory. The piler-s3 service then processes these files and uploads them to the S3 store.
Piler will upload all files for a given tenant to a single bucket only.
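If you only want to try the S3 workflow, a self-hosted Minio instance is sufficient for testing. The following is a minimal sketch; the container name, credentials and data path are placeholders, not piler defaults:

docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=youraccesskey \
  -e MINIO_ROOT_PASSWORD=yoursecretkey \
  -v /srv/minio-data:/data \
  minio/minio server /data --console-address ":9001"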
Enable the S3 storage#
To enable the S3 store, set the following in /etc/piler/piler.conf:
s3_hostname=your-s3-host.domain.com:9000
s3_region=us-east-1
s3_access_key=youraccesskey
s3_secret_key=yoursecretkey
s3_bucket_prefix=
s3_dir=/var/piler/s3
s3_use_subdirs=1
s3_threads=10
s3_secure=1
s3_hostname should be the S3 hostname and port, e.g. Minio uses port 9000. For Wasabi, it might be s3.us-west-1.wasabisys.com.
Set s3_secure=0 if your S3 host does NOT support TLS.
By default s3_bucket_prefix is empty. If you need to specify one, make sure it conforms to domain name rules.
Note: you must NOT use the dash (-) character; however, you may use a trailing dot, e.g.
s3_bucket_prefix=someprefix.
S3 settings for Oracle Cloud:
s3_access_key=youraccesskey
s3_bucket_prefix=
s3_dir=/var/piler/s3
s3_hostname=frnsjgsfvvgl.compat.objectstorage.eu-frankfurt-1.oraclecloud.com
s3_region=eu-frankfurt-1
s3_secret_key=yoursecretkey
s3_secure=1
s3_threads=10
s3_use_subdirs=1
S3 settings for a Cloudflare R2 bucket using the EU jurisdiction:
s3_access_key=xxxxxxxxxxxxxxxxxx
s3_bucket_prefix=
s3_dir=/var/piler/s3
s3_hostname=zzzzzzzzzzzzzzzzz.eu.r2.cloudflarestorage.com
s3_region=auto
s3_secret_key=yyyyyyyyyyyyyyyyyyyyyyyyyyyyy
s3_secure=1
s3_threads=10
s3_use_subdirs=1
S3 settings for a Backblaze bucket using eu-central:
(Note: Backblaze doesn't accept a dot (.) in the bucket name!)
s3_access_key=xxxxxxxxxxxxxxxxxx
s3_bucket_prefix=
s3_dir=/var/piler/s3
s3_hostname=s3.eu-central-003.backblazeb2.com
s3_region=us-east-1
s3_secret_key=yyyyyyyyyyyyyyyyyyyyyyyyyyyyy
s3_secure=1
s3_threads=10
s3_use_subdirs=1
S3 settings for the Hetzner object store at the fsn1 location:
s3_access_key=xxxxxxxxxxxxxxxxxx
s3_bucket_prefix=
s3_dir=/var/piler/s3
s3_hostname=<your hetzner id>.fsn1.your-objectstorage.com
s3_region=us-east-1
s3_secret_key=yyyyyyyyyyyyyyyyyyyyyyyyyyyyy
s3_secure=1
s3_threads=10
s3_use_subdirs=1
Update the UI config#
Add the S3 configuration to the UI env file as well. The example below is for a Minio installation on port 9000 with no TLS configured:
RETRIEVER_METHOD=s3
S3_ENDPOINT=minio:9000
S3_ACCESS_KEY=root
S3_SECRET_KEY=example156
S3_USE_SSL=false
S3_REGION=us-east-1
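As a quick sanity check you can verify that the host running the UI can reach the S3 endpoint. The example below assumes the Minio setup shown above (the /minio/health/live endpoint is Minio specific; for other providers a plain HTTPS request to the endpoint is enough):

curl -I http://minio:9000/minio/health/live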
Enable the piler-s3 service#
cd /etc/systemd/system
ln -sf /usr/libexec/piler/piler-s3.service .
systemctl daemon-reload
systemctl enable piler-s3
systemctl start piler-s3
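Once the service is running, you can check that it picks up the spooled files. The commands below use only standard systemd tools and the s3_dir path configured earlier; while archiving, the spool directory should stay (nearly) empty:

systemctl status piler-s3
journalctl -u piler-s3 -f
find /var/piler/s3 -type f | wc -l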
S3 statistics#
To enable S3 statistics, install the minio client (mc):
curl -o /usr/local/bin/mc https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x /usr/local/bin/mc
Then create a config file for minio:
su - piler
mkdir -p /var/piler/.mc/
Create /var/piler/.mc/config.json. The example below uses the following parameters (be sure to use your own settings):
s3_hostname=https://zzzzzzzzzzzzzzzzz.eu.r2.cloudflarestorage.com
s3_access_key=aabbaabbaabb
s3_secret_key=deadbeef0123456789abcdef
/var/piler/.mc/config.json:
{"version":"10", "aliases": {"minio": {"url":"https://zzzzzzzzzzzzzzzzz.eu.r2.cloudflarestorage.com","accessKey":"aabbaabbaabb", "secretKey": "deadbeef0123456789abcdef", "api": "S3v4", "path": "auto"}}}
Finally, add the following to piler's crontab:
30 * * * * /usr/libexec/piler/s3-bucket-stat.sh minio
This updates the s3_bucket_stat table in the piler MySQL database every hour. The result looks like the following; these details are displayed on the tenant listing page.
MariaDB [piler]> select * from s3_bucket_stat;
+---------+----------+---------+---------------------+
| bucket | size | objects | t |
+---------+----------+---------+---------------------+
| fictive | 20481360 | 2978 | 2024-02-06 10:30:08 |
+---------+----------+---------+---------------------+
1 row in set (0.001 sec)