Backups

We recommend backing up your database and uploaded files regularly. It is also highly recommended to back up before you upgrade to a more recent Corteza version.

You can define a cron job that backs up your data to external storage.

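For example, assuming a MySQL-backed deployment in /opt/corteza and external storage mounted at /mnt/backup (both hypothetical paths), a crontab entry that dumps the database every night at 02:30 could look like the sketch below. It uses the same mysqldump command described in the MySQL section of this page; adjust the service name, database name, and credentials to your deployment.

Example crontab entry:
# Hypothetical example: nightly compressed dump copied to external storage.
# Edit your crontab with crontab -e; the % character must be escaped as \% inside crontab entries.
30 2 * * * cd /opt/corteza && docker-compose exec -T --env MYSQL_PWD=your-password db mysqldump your-db-name -u your-username | gzip > /mnt/backup/corteza-$(date +\%F).sql.gz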

Reducing Backup Size

In some cases, you can omit specific database tables to reduce the size of the backup. This also comes in handy when you wish to migrate your production database to your local instance.

auth_sessions

The auth_sessions table stores users’ authentication sessions. If you omit this table, users will have to log in again after the database is restored. The auth_sessions table can safely be omitted in any case.

credentials

The credentials table stores authentication-related secrets such as passwords, reset tokens, and email confirmation tokens. If you omit the credentials table, all issued credentials will need to be re-issued: users will have to reset their passwords, and password reset and email confirmation emails will have to be re-sent. The credentials table can safely be omitted in any case; in some cases omitting it is even recommended.

auth_oa2tokens

The auth_oa2tokens table stores access tokens issued by the Corteza server to web applications. If you omit the auth_oa2tokens table, all of the access tokens will be invalidated and will need to be re-issued. The auth_oa2tokens table can safely be omitted in any case; in some cases omitting it is even recommended.

If you invalidate access tokens, all of the authenticated web applications must re-authenticate. This could cause issues for web applications that only regenerate tokens when they are scheduled to expire.

actionlog

The actionlog table stores events the Corteza server considers significant, such as creating users, registering auth clients, and looking up records. If you omit the actionlog table, the action history will be lost. The actionlog table can safely be omitted for development-related cases.

automation_sessions

The automation_sessions table stores metadata about workflow execution, such as which step the workflow is on, what parameters were passed to the workflow, and the outcome of the execution. If you omit the automation_sessions table, the workflow execution history will be lost and any prompted or suspended workflows will not complete. The automation_sessions table can safely be omitted for development-related cases.

compose_record

The compose_record table stores record metadata created in your Corteza Low Code applications. If you omit the compose_record table, all of the records created for your Low Code applications will be lost. The compose_record table can be omitted if you want to back up the system structure but omit all of the data.

compose_record_value

The compose_record_value table stores record values created in your Corteza Low Code applications. If you omit the compose_record_value table, all of the record values created for your Low Code applications will be lost. The compose_record_value table can be omitted if you want to back up the system structure but omit all of the data.

queue_messages

The queue_messages table stores the messages passed into the messaging queue. If you omit the queue_messages table, all of the messages passed into the messaging queue will be lost. The queue_messages table can safely be omitted in most cases.

resource_activity_log

The resource_activity_log table stores the history of resource changes, such as record value changes. If you omit the resource_activity_log table, the history of all resources will be lost. The resource_activity_log table can safely be omitted for development-related cases.
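
To decide which tables are worth omitting, it helps to check how large they actually are. The commands below are a minimal sketch that assumes the docker-compose setups described in the following sections; the service name, database name, and credentials are placeholders.

Checking table sizes:
# MySQL: list the ten largest tables with their approximate size in MB.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysql your-db-name -u your-username -e "SELECT table_name, ROUND((data_length + index_length) / 1024 / 1024) AS size_mb FROM information_schema.tables WHERE table_schema = 'your-db-name' ORDER BY size_mb DESC LIMIT 10;"

# PostgreSQL: list the ten largest tables with a human-readable size.
docker-compose exec -T db \
    psql -U your-username -d your-db-name -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS size FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"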

MySQL Database

If you’re using a different database engine, refer to its documentation on how to perform backups.

Backup

Refer to the reducing backup size section to see what tables you can omit for your use-case. An example command which omits specific tables is available below.

We recommend you use the mysqldump tool. It’s built into the db container (percona:8.0 image).

Do not attempt to copy raw database files to perform a backup. It might lead to corrupted data.

By default, mysqldump locks the tables when you run the export. Table locks might cause issues when running in production, so do keep this in mind.

Database dump command:
# This dumps the entire database and places it in the dump.sql file.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysqldump your-db-name --add-drop-database -u your-username > dump.sql

# This dumps the database without actionlog, automation sessions, and resource activity log
# These are generally the largest and can safely be omitted.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysqldump your-db-name --add-drop-database \
    --ignore-table=your-db-name.actionlog \
    --ignore-table=your-db-name.automation_sessions \
    --ignore-table=your-db-name.resource_activity_log \
    -u your-username > dump.sql
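
If your tables use the InnoDB storage engine, you can avoid the table locks mentioned above by adding the --single-transaction flag, which makes mysqldump export a consistent snapshot without blocking writes. A sketch based on the commands above:

# This dumps the entire database inside a single transaction instead of locking the tables.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysqldump your-db-name --single-transaction --add-drop-database -u your-username > dump.sql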

If you’ve changed the database service name (db) inside your docker-compose.yaml, make sure to change it in the above command.

Restoration

We recommend that the Corteza server is shut down until the restore procedure finishes.
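
For example, assuming the Corteza server service is named server in your docker-compose.yaml (a placeholder; check your setup), you can stop just the server while leaving the database running:

# Stop the Corteza server container (hypothetical service name) before the restore ...
docker-compose stop server
# ... and start it again once the restore has finished.
docker-compose start server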

Database restore command:
# This restores the database based on the dump.sql file.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysql your-db-name -u your-username < dump.sql
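
After the restore finishes, a quick sanity check such as listing the tables can confirm the data is back (a hypothetical check; use the same credentials as above):

# This lists the tables in the restored database.
docker-compose exec -T \
    --env MYSQL_PWD=your-password db \
    mysql your-db-name -u your-username -e "SHOW TABLES;"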

If you’ve changed the database service name (db) inside your docker-compose.yaml, make sure to change it in the above command.

PostgreSQL Database

Backup

Refer to the reducing backup size section to see what tables you can omit for your use-case. An example command which omits specific tables is available below.

We recommend you use the pg_dumpall or pg_dump tool. pg_dumpall is a utility for writing out ("dumping") all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in a cluster.

Do not try to copy raw database files to perform a backup. It might lead to corrupted data.

By default, pg_dump only takes shared (ACCESS SHARE) locks on the tables it exports; regular reads and writes continue, but schema changes are blocked until the dump finishes. Keep this in mind when running in production.

Database Dump Command:
# This dumps all databases and places them in the dump.sql file.
docker-compose exec -T db \
    pg_dumpall -c -U your-username > dump.sql

# This dumps the entire database and places it in the dump.sql file.
docker-compose exec -T db \
    pg_dump -d your-db-name -c -U your-username > dump.sql

# To reduce the size of the dump, you can compress it with gzip.
# This dumps all databases and places them in the dump.gz file.
docker-compose exec -T db \
    pg_dumpall -c -U your-username | \
    gzip > /var/data/postgres/backups/dump.gz


# This dumps the database without actionlog, automation sessions, and resource activity log
# These are generally the largest and can safely be omitted.
docker-compose exec -T db \
    pg_dump -d your-db-name -c -U your-username \
    -T actionlog -T automation_sessions -T resource_activity_log > dump.sql

If you’ve changed the database service name (db) inside your docker-compose.yaml, make sure to also change it in the above command.

Restoration

It is recommended that the Corteza server is shut down until the restore procedure finishes.

Database Restore Command:
# This restores the database based on the dump.sql file.
cat dump.sql | \
    docker-compose exec -T db psql -U your-username

# This restores a specific database based on the dump.sql file.
cat dump.sql | \
    docker-compose exec -T db psql -U your-username -d your-db-name

# To restore a compressed dump,
# this restores the database based on the dump.gz file.
gunzip < dump.gz | \
    docker-compose exec -T db psql -U your-username

If you’ve changed the database service name (db) inside your docker-compose.yaml, make sure to change it in the above command.
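
As with MySQL, a quick sanity check after the restore is to list the tables (a hypothetical check; adjust the credentials and database name to your deployment):

# This lists the tables in the restored database.
docker-compose exec -T db \
    psql -U your-username -d your-db-name -c "\dt"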

Files

Backup

Without an object storage service like MinIO, uploaded files are stored directly on the filesystem. The Corteza server stores data in the /data directory (unless configured differently with the *_STORAGE_PATH environment variables).

You can use any standard file management tools to make a backup copy of the files.

Compressing files with the tar command:
# This compresses all your uploaded files into the backup.tar.bz2 archive.
tar -cjf backup.tar.bz2 data/server/
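
Alternatively, you can mirror the uploaded files to an external backup location, for example with rsync (the destination path below is a placeholder):

Copying files with the rsync command:
# This copies all your uploaded files to the hypothetical /mnt/backup/corteza-files/ directory.
rsync -a data/server/ /mnt/backup/corteza-files/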

Restore

Extracting files from the archive with the tar command:
# This restores your backup.tar.bz2 archive
tar -xjf backup.tar.bz2