
Installing Corteza

Corteza with Docker Compose

Set up Docker

If Docker is already set up on the machine where you want to use Corteza, you can skip this section. If you are using a Docker version below 18.0, we strongly encourage you to update.

If you’re not sure whether you have Docker, open your console or terminal and enter:
docker -v

If the response is "command not found", download and install the Docker Community Edition for desktop, server or cloud that fits your environment.
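The examples below also rely on Docker Compose. You can check for it in the same way (on newer Docker installations the Compose plugin is invoked as docker compose instead of the standalone docker-compose binary):
docker-compose version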

Configuration files

A basic Corteza service configuration for Docker Compose consists of one or two files: the mandatory docker-compose.yaml and, usually, an accompanying .env file.

Environment file (.env)

The format of environment files is simple and clean: one KEY=VALUE pair per line. The file is usually named .env and placed in the same directory as the application. In a Docker Compose context, it is placed in the same directory as the docker-compose.yaml configuration file.

The environment file is used on three levels:
  1. Configuring Docker Compose itself (implicit: the .env file in the same directory as docker-compose.yaml)

  2. Providing values for variable substitution in the docker-compose.yaml configuration (implicit: the same .env file)

  3. Passing variables to the configured services. Environment file(s) must be explicitly referenced by each service (env_file: [ .env ]). You can use .env or one or more separate environment files. See [Delaying API execution].

# Docker images version (1)
VERSION=latest (2)
(1) Comment line
(2) Key and value

docker-compose.yaml file

This file describes the service, network and storage configuration in a human- and machine-readable format.

Full Docker Compose file reference documentation is available on docs.docker.com.

YAML (a recursive acronym for "YAML Ain’t Markup Language") is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted.

Environment variables from .env can be used to make the docker-compose.yaml file more compact, modular and simpler to change. You can define a variable (like VERSION) and then use it inside the YAML file (as ${VERSION}).
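To see the result of this substitution without starting anything, you can render the resolved configuration, for example by running the following from the directory containing both files:
docker-compose config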

Basic setup for local demo

The configuration files provided for the demo include a few extra settings that enable it to run in a local environment: all services are on the same network, ports are bound to the host’s network, and so on.

This is not an optimal setup for a production environment. See the Nginx Proxy and Production deployment sections below for configuration examples better suited for production deployment.

.env
# Version of Corteza Docker images
VERSION=2020.6

# Database connection
DB_DSN=corteza:change-me@tcp(db:3306)/corteza?collation=utf8mb4_general_ci

# Secret to use for JWT token
# Make sure you change it (>30 random characters) if
# you expose your deployment to outside traffic
AUTH_JWT_SECRET=this-is-only-for-demo-purposes--make-sure-you-change-it-for-production

############################################################
# Only part of a documentation example

# In case you have other services running on your localhost,
# change these port numbers to available ones.
LOCAL_DEMO_SPA_PORT=8080
LOCAL_DEMO_API_PORT=8081
LOCAL_DEMO_CRD_PORT=8082
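If you intend to expose the deployment beyond localhost, replace AUTH_JWT_SECRET with a long random value; one way to generate such a value is with openssl (assuming it is installed on your machine):
openssl rand -base64 48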
docker-compose.yaml
version: '3.5'

services:
  db:
    image: percona:8.0
    restart: on-failure
    environment:
      # To be picked up by percona image when creating the database
      # Must match with DB_DSN settings inside .env
      MYSQL_DATABASE:      corteza
      MYSQL_USER:          corteza
      MYSQL_PASSWORD:      change-me
      MYSQL_ROOT_PASSWORD: change-me-too
    healthcheck: { test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"], timeout: 20s, retries: 10 }

  server:
    image: cortezaproject/corteza-server-monolith:${VERSION}
    restart: on-failure
    env_file: [ .env ]
    environment:
      # Informing Corredor where it can contact us
      CORREDOR_API_BASE_URL_SYSTEM:    "http://server:80/system"
      CORREDOR_API_BASE_URL_MESSAGING: "http://server:80/messaging"
      CORREDOR_API_BASE_URL_COMPOSE:   "http://server:80/compose"
      CORREDOR_ADDR:                   "corredor:${LOCAL_DEMO_CRD_PORT}"
    depends_on: [ db, corredor ]
    # Binds internal port 80 to port LOCAL_DEMO_API_PORT on localhost
    ports: [ "127.0.0.1:${LOCAL_DEMO_API_PORT}:80" ]

  corredor:
    image: cortezaproject/corteza-server-corredor:${VERSION}
    restart: on-failure
    env_file: [ .env ]
    environment:
      # Informing Corredor where it can contact us
      CORREDOR_ADDR:                   "corredor:${LOCAL_DEMO_CRD_PORT}"
    # Binds internal port to port LOCAL_DEMO_CRD_PORT on localhost
    ports: [ "127.0.0.1:${LOCAL_DEMO_CRD_PORT}:50051" ]

  webapp:
    image: cortezaproject/corteza-webapp:${VERSION}
    restart: on-failure
    depends_on: [ server ]
    environment:
      # Monolith server in the backend, all services can be found under one base URL
      MONOLITH_API: 1
      # Configure web application with API location
      API_BASEURL:  "127.0.0.1:${LOCAL_DEMO_API_PORT}"
    # Binds internal port 80 to port LOCAL_DEMO_SPA_PORT on localhost
    ports: [ "127.0.0.1:${LOCAL_DEMO_SPA_PORT}:80" ]

Some of the configuration lines in the provided YAML examples and files are written on a single line for brevity and to make enabling/disabling (commenting out) simpler.

Create an empty directory with the .env and docker-compose.yaml files in it and copy the contents from the examples above. Some operating systems hide files that start with a dot, so make sure the .env file is properly named.
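For example, on a Linux or macOS shell (the directory name corteza is arbitrary; ls -la lists hidden dot-files so you can verify .env is there):
mkdir corteza && cd corteza
touch .env docker-compose.yaml
ls -la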

Start all services (database, server, corredor, webapp)
docker-compose up -d
Running docker-compose ps should produce something like:
      Name                    Command                  State               Ports
-----------------------------------------------------------------------------------------
basic_corredor_1   docker-entrypoint.sh node  ...   Up             127.0.0.1:8082->50051/tcp, 80/tcp
basic_db_1         /docker-entrypoint.sh mysqld     Up (healthy)   3306/tcp, 33060/tcp
basic_server_1     /bin/corteza-server serve-api    Up             127.0.0.1:8081->80/tcp
basic_webapp_1     /entrypoint.sh                   Up             127.0.0.1:8080->80/tcp

You can see four services up and running. The web application (port 8080) and the API (port 8081) are accessible on localhost.

Direct your browser to http://localhost:8080 (use another port if you changed the value of LOCAL_DEMO_SPA_PORT). On first visit, you should be redirected to /auth where you can log in, sign up, etc.

Create your account through the sign-up form. Corteza detects that the database is empty and auto-promotes the first user to the administrator role.

Stop and remove containers and data (does not ask for confirmation, stops containers if running and removes volumes):
docker-compose rm --force --stop -v

Other useful docker-compose commands:

View container output (logs), follow it and show the last 20 lines from each service:
docker-compose logs --follow --tail 20
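You can also target a single service, for example to follow only the server logs or to restart it after a configuration change (service names match the ones defined in docker-compose.yaml above):
docker-compose logs --follow --tail 20 server
docker-compose restart server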

Nginx Proxy

The Nginx Proxy setup is not needed for local testing. The instructions from Basic setup for local demo are enough to get the local demo up & running.

Nginx Proxy (Docker image jwilder/nginx-proxy) is an auto-configurable reverse proxy that routes traffic from your public IP to containers on the host. LetsEncrypt Nginx Proxy Companion (Docker image jrcs/letsencrypt-nginx-proxy-companion) handles the automated creation, renewal and use of Let’s Encrypt certificates for proxied Docker containers.

In the following instructions, we assume you don’t have anything similar set up in your current environment. If you have other means of providing traffic forwarding and/or SSL certificate handling, proceed with caution!

Please see the Nginx Proxy and LetsEncrypt Nginx Proxy Companion GitHub pages for more details.

How it works & benefits
  1. Both images mount /var/run/docker.sock (read-only) and listen to docker events (when containers start or stop)

  2. Containers (like the Corteza server and frontend application) that are exposed publicly no longer have to publish their ports on a public IP

  3. No complicated firewall or network forwarding rules are needed

  4. Containers MUST (also) be on the same network as nginx-proxy (in the examples we’re using a network named proxy)

  5. Nginx Proxy detects VIRTUAL_HOST on each container that comes online. Then it auto-generates configuration, reloads itself and starts forwarding HTTP traffic to that container

  6. LetsEncrypt companion detects LETSENCRYPT_HOST and starts certificate creation process with LE. It also reconfigures nginx-proxy, adds certificates and enables redirection from HTTP to HTTPS
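Once the proxy has picked up a container (steps 5 and 6 above), you can inspect the configuration it generated; /etc/nginx/conf.d/default.conf is the location used by jwilder/nginx-proxy for the generated virtual hosts:
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf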

docker-compose.yaml
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    networks:
      - proxy
    ports:
      - "80:80"
      - "443:443"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    volumes:
      - ./certs:/etc/nginx/certs
      - ./htpasswd:/etc/nginx/htpasswd
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./custom.conf:/etc/nginx/conf.d/custom.conf:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    depends_on:
      - nginx-proxy
    volumes:
      - ./certs:/etc/nginx/certs
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

# Create network if it does not exist
networks: { proxy: { name: proxy } }
Add a custom.conf Nginx configuration file next to docker-compose.yaml:
# Make sure we can upload at least 200Mb files
client_max_body_size    200M;

# Add other custom configs.

Start both services
docker-compose up -d
Running docker-compose ps should produce something like:
      Name                     Command               State                    Ports
-----------------------------------------------------------------------------------------------------
nginx-letsencrypt   /bin/bash /app/entrypoint. ...   Up
nginx-proxy         /app/docker-entrypoint.sh  ...   Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
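As a quick smoke test from the host, you can send a request to the proxy; until a container with a matching VIRTUAL_HOST is attached, nginx-proxy is expected to answer unknown hosts with a 503:
curl -I http://localhost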

Production deployment

This example describes a production-ready deployment. It depends on a running nginx-proxy service (see Nginx Proxy above).

This demo uses two example domains: your-demo.example.tld and api.your-demo.example.tld. Configure your DNS: add two host records and point them to the IP address (A record) or hostname (CNAME record) of the server you’re using for the Corteza deployment.
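Before continuing, you can verify that both records resolve to your server, for example with dig (host or nslookup work just as well):
dig +short your-demo.example.tld
dig +short api.your-demo.example.tld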

.env
# We'll use this in all variables in docker-compose.yml
DOMAIN=your-demo.example.tld
VERSION=2020.6

# Database connection
DB_DSN=corteza:change-me@tcp(db:3306)/corteza?collation=utf8mb4_general_ci

# Secret to use for JWT token
# Make sure you change it (>30 random characters) if
# you expose your deployment to outside traffic
AUTH_JWT_SECRET=this-is-only-for-demo-purposes--make-sure-you-change-it-for-production

CORREDOR_ADDR=corredor:80

# SMTP settings
# Point this to your local or external SMTP server
SMTP_HOST=smtp-server.example.tld:587
SMTP_USER=postmaster@smtp-server.example.tld
SMTP_PASS=g80jrwoihghwefhweuifhweoiufhweuiofhwuie
SMTP_FROM='"Demo" <info@your-demo.example.tld>'
docker-compose.yaml
version: '3.5'

services:
  db:
    image: percona:8.0
    restart: on-failure
    environment:
      # To be picked up by percona image when creating the database
      # Must match with DB_DSN settings inside .env
      MYSQL_DATABASE:      corteza
      MYSQL_USER:          corteza
      MYSQL_PASSWORD:      change-me
      MYSQL_ROOT_PASSWORD: change-me-too
    healthcheck: { test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"], timeout: 20s, retries: 10 }
    networks: [ internal ]
    # Uncomment to use local fs for data persistence
    # volumes: [ "./data/db:/var/lib/mysql" ]

  server:
    image: cortezaproject/corteza-server-monolith:${VERSION}
    restart: on-failure
    env_file: [ .env ]
    depends_on: [ db, corredor ]
    networks: [ proxy, internal ]
    environment:
      VIRTUAL_HOST:     api.${DOMAIN}
      LETSENCRYPT_HOST: api.${DOMAIN}
      CORREDOR_API_BASE_URL_COMPOSE:   https://api.${DOMAIN}/compose
      CORREDOR_API_BASE_URL_MESSAGING: https://api.${DOMAIN}/messaging
      CORREDOR_API_BASE_URL_SYSTEM:    https://api.${DOMAIN}/system
    # Uncomment to use local fs for data persistence
    # volumes: [ "./data/server:/data" ]

  corredor:
    image: cortezaproject/corteza-server-corredor:${VERSION}
    networks: [ internal ]
    restart: on-failure
    env_file: [ .env ]

  webapp:
    image: cortezaproject/corteza-webapp:${VERSION}
    restart: on-failure
    depends_on: [ server ]
    networks: [ proxy ]
    environment:
      MONOLITH_API:     "true"
      VIRTUAL_HOST:     ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}

networks:
  internal: {}
  proxy: { external: true }

Create an empty directory with the .env and docker-compose.yaml files in it and copy the contents from the examples above. Some operating systems hide files that start with a dot, so make sure the .env file is properly named.

We advise against merging/mixing Corteza and nginx-proxy in the same directory.

It can be done but requires some experience with Docker Compose.

Make sure your nginx-proxy service is running before starting these services. If the nginx-proxy service is not started or you have changed your configuration, you might get an error like:

ERROR: Network proxy declared as external, but could not be found. Please create the network manually using `docker network create proxy` and try again.

Inspect your configuration files and compare them with the ones provided in this documentation.
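A quick way to check whether the external network exists (it is created by the nginx-proxy stack above, so start that stack first if it is missing):
docker network ls | grep proxy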

Start all services (database, server, corredor, webapp)
docker-compose up -d
Running docker-compose ps should produce something like:
        Name                       Command                  State              Ports
-------------------------------------------------------------------------------------------
production_corredor_1   docker-entrypoint.sh node  ...   Up             80/tcp
production_db_1         /docker-entrypoint.sh mysqld     Up (healthy)   3306/tcp, 33060/tcp
production_server_1     /bin/corteza-server serve-api    Up             80/tcp
production_webapp_1     /entrypoint.sh                   Up             80/tcp

You can see four services up and running. Your services should be available on the configured domains shortly (within a couple of minutes).

Direct your browser to http://your-demo.example.tld. On first visit, you should be redirected to /auth where you can log in, sign up, etc.
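If you prefer the command line, you can confirm that the proxy is forwarding traffic and a certificate has been issued (-L follows the HTTP-to-HTTPS redirect added by the LetsEncrypt companion):
curl -IL http://your-demo.example.tld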

Create your account through the sign-up form. Corteza detects that the database is empty and auto-promotes the first user to the administrator role.

Stop and remove containers and data (does not ask for confirmation, stops containers if running and removes volumes):
docker-compose rm --force --stop -v

Other useful docker-compose commands:

View container output (logs), follow it and show the last 20 lines from each service:
docker-compose logs --follow --tail 20

Relaying inbound email to Corteza with Postfix

This quick how-to shows what to modify, and where, in Postfix’s configuration files to enable forwarding emails to Corteza.

Change /etc/postfix/main.cf:
virtual_alias_maps = pcre:/etc/postfix/virtual_alias
Add virtual alias to /etc/postfix/virtual_alias:
# Catch-all for corteza.domain.tld and redirect it to corteza_sink mailbox
/.+@corteza\.domain\.tld$/ corteza_sink
Update the virtual-alias map/db file and reload Postfix:
postmap /etc/postfix/virtual_alias
postfix reload
Add entry to /etc/aliases
corteza_sink: "| curl --data-binary @- 'https://api.your-corteza-instance.tld/system/sink?content-type=email&expires=&method=POST&origin=postfix&sign=6280d530ae74f1f9c55e4dd362c9ef2094221287'"

This forwards mail for the specified mailbox to a curl command, which then pushes the raw email to the sink endpoint of the Corteza API.

See the email automation setup for information on how to create a signed URL.

Update aliases
newaliases

Test changes

You can verify that the configuration changes have the desired effect by sending an email to the configured address, or with a simple command line:

echo "hello corteza"|mail -s 'hello' test@corteza.domain.tld

It’s best if you try this from a different machine than the one running Postfix.

This should produce a new entry in your mail log (usually /var/log/mail.log) where you can see information about the received email.
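To watch the log while the test message is being delivered (the path can differ between distributions):
tail -f /var/log/mail.log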

Log example:
postfix/smtpd[23155]: connect from some-host.tld[xxx.xxx.xxx.xxx]
postfix/smtpd[23155]: 277AF5C1B78: client=some-host.tld[xxx.xxx.xxx.xxx]
postfix/cleanup[23159]: 277AF5C1B78: message-id=<b808218e-ce41-6cbf-cb4f-be2b4cf8f776@crust.tech>
postfix/qmgr[14490]: 277AF5C1B78: from=<sender@some-host.tld>, size=1476, nrcpt=1 (queue active)
postfix/smtpd[23155]: disconnect from some-host.tld[xxx.xxx.xxx.xxx] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7
postfix/local[23160]: 277AF5C1B78: to=<corteza_sink@my-server>, orig_to=<demo@corteza.domain.tld>, relay=local, delay=0.67, delays=0.03/0.01/0/0.62, dsn=2.0.0, status=sent (delivered to command:  curl --data-binary @- 'https://api.your-corteza-instance.tld/system/sink?content-type=email&expires=&method=POST&origin=postfix&sign=6280d530ae74f1f9c55e4dd362c9ef2094221287'')
postfix/qmgr[14490]: 277AF5C1B78: removed
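You can also test the sink endpoint directly with curl, bypassing Postfix entirely. In this sketch, raw-email.eml is a hypothetical file containing a complete raw email, and the signed URL is the same one used in the /etc/aliases entry above:
curl --data-binary @raw-email.eml 'https://api.your-corteza-instance.tld/system/sink?content-type=email&expires=&method=POST&origin=postfix&sign=6280d530ae74f1f9c55e4dd362c9ef2094221287'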

Forward logs to ELK

The Corteza server can log requests, responses, errors and other events as JSON. By default, these logs are output to the Docker container console. You can configure your (docker-compose) services to forward these logs to an external service.
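You can inspect these JSON log lines before wiring up any forwarding (server is the service name used in the examples above):
docker-compose logs --tail 5 server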

We’ve prepared the cortezaproject/elk Docker image with the necessary setup to consume and visualize these logs.

Using the GELF Docker logging driver with the ELK server

The following example assumes you’ve taken the configuration from cortezaproject/elk without any changes. You should also extend the services defined in your docker-compose.yml with the networks and logging sections:

# ...
    networks: [ elk ] # or add elk to networks you already have
    logging:
      driver: gelf
      options:
        gelf-address: "tcp://elk:12201"
# ...