You are reading the documentation for an outdated Corteza release. 2023.9 is the latest stable Corteza release.

Extensions

We define a set of automation scripts, resources and other assets (system and Low Code configurations) as an extension.

You can use our samples and testing template to get set up faster.

To define a new extension, first initialize a new Node.js project and define the appropriate file structure:
/package.json
/...
/server-scripts
  /...
/client-scripts
  /auth
      /...
  /admin
      /...
  /compose
      /...
  /messaging
      /...
  /shared
      /...

/… indicates that you are free to structure your files as you see fit.

/auth, /admin, /compose, and /messaging contain scripts specific to each web application.

/shared contains code that client scripts can reuse.

We recommend grouping automation scripts based on their context. For example, scripts working over leads could go into a /lead/ sub-directory.

When you define utilities (and other tool functions) directly inside /server-scripts or /client-scripts (excluding /shared), Corredor treats those files as automation scripts as well.

Most of the time, they will be invalid automation scripts. Place utilities in the /shared directories, or define a /lib (or similar) directory at the root of the extension.
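For example, a utility placed under /shared is a plain module and is never picked up as an automation script. The file path and function below are hypothetical:

```javascript
// /client-scripts/shared/formatFullName.js (hypothetical path)
// A plain utility module; because it lives under /shared,
// Corredor does not treat it as an automation script.
export function formatFullName ({ FirstName, LastName }) {
  // Join the defined name parts with a single space
  return [FirstName, LastName].filter(Boolean).join(' ')
}
```

Automation scripts can then import it with a relative import, for example import { formatFullName } from '../shared/formatFullName'.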

Automation scripts

An automation script is a piece of JavaScript code that implements some business logic and conforms to this interface:

export interface Script {
  label?: string;

  description?: string;

  security?: {
    runAs?: string;
    deny: string[];
    allow: string[];
  }|string;

  triggers: Trigger[];

  exec?: ScriptFn;
}

One automation script per file.

You can use this template for the most common operations:
export default {
  label: "label goes here",
  description: "description goes here",

  // Use the ones you need, delete the rest
  triggers ({ before, after, on, at }) {
    return before('event goes here')
      .where('constraint goes here')
      // Add/remove constraints here
  },

  // remove async if you aren't doing any async operations
  // use object destructuring for args and ctx
  async exec(args, ctx) {
    // Code goes here
  },
}

There are two main categories of automation scripts: server scripts and client scripts.

Server scripts

These are executed in the Corteza Corredor server.

Use server scripts when:
  • working with sensitive data,

  • communicating with external APIs,

  • the execution must not be interruptible by the user.

Example use cases:
  • create additional records based on the current data,

  • send email notifications,

  • run statistical analysis.

Client scripts

These are executed in the client’s browser (user agent, if you will).

Use client scripts when:
  • interacting with the user,

  • performing data validation,

  • inserting default values.

Example use cases:
  • prompt the user to confirm the form submission,

  • validate the form submitted by the user,

  • redirect the user after they’ve submitted the form,

  • open an external webpage.

A rule of thumb: if you need to interact with the user (show notifications, request confirmations), use client scripts. Otherwise, use server scripts.

Treat client scripts as less secure (anyone can inspect their contents from the browser) and less reliable (a user can terminate their execution by closing the page).

Script execution

The execution function (exec) implements the business logic that the automation script should perform.

Any code that you want to execute should be directly in the execution function or referenced via importing.

The execution function’s signature looks like this:
interface ScriptFn {
  (args: exec.Args, ctx?: exec.Ctx): unknown;
}

Execution arguments

The arguments (args) differ based on the event that triggered the automation script. Refer to resources and events for a complete list.

Arguments to a client script are provided as references to the original objects, meaning that any change to an argument is reflected on the original object.

Arguments to a server script are provided as a copy of the original object, meaning that changes are not reflected on the original object.
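The difference is plain JavaScript reference semantics. A minimal illustration (not Corredor code) of why a server script must return the changed object, while a client script's mutation is visible directly:

```javascript
const original = { values: { FirstName: 'Jane' } }

// Client script: the argument is a reference, so a mutation
// is visible on the original object.
const byReference = original
byReference.values.FirstName = 'John'
// original.values.FirstName is now 'John'

// Server script: the argument is a copy (sketched here with a JSON
// round-trip), so a mutation stays local; the script must return
// the changed object for the change to take effect.
const byCopy = JSON.parse(JSON.stringify(original))
byCopy.values.FirstName = 'Janet'
// original.values.FirstName is still 'John'
```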

Execution context

The context (ctx) is static for all events.

ctx.console

console provides a logger. In client scripts it refers to the window.console object; in server scripts it refers to a Pino instance.

ctx.log

Shortcut for ctx.console.log.

ctx.$authUser

Auth user is a reference to the invoking user.

ctx.SystemAPI

API client for Corteza System interaction.

ctx.ComposeAPI

API client for Corteza Low Code interaction.

ctx.MessagingAPI

API client for Corteza Messaging interaction.

ctx.System

Corredor helper for the Corteza System.

ctx.Compose

Corredor helper for the Corteza Low Code.

ctx.ComposeUI

Corredor helper for the Corteza Low Code user interface.

ctx.Messaging

Corredor helper for the Corteza Messaging.

ctx.frontendBaseURL

Base URL used by the front-end web applications. This is useful when generating URLs that point to the Corteza applications.

Execution result

The execution result determines what should happen next.

Unknown Error

Any unknown error (other than Aborted) terminates the current script execution and prevents the event from triggering any additional automation scripts.

Aborted Error

An error with the value of 'Aborted' stops the execution of the current automation script. The event is able to trigger any additional automation scripts.

false

Same as Aborted Error.

unknown

Any other result indicates that the execution was successful and the next script can be triggered. What should happen next is relative to the automation script; see below sections for more details.

undefined and null also count as unknown.
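To make these outcomes concrete, here is a small, self-contained sketch of how a dispatcher might interpret an execution result. This is an illustration, not the actual Corredor implementation:

```javascript
// Interpret a script's outcome: 'failed' stops the whole chain,
// 'aborted' stops only the current script, and 'success' lets the
// next script run (undefined and null count as success).
function classifyResult (fn) {
  try {
    const result = fn()
    return result === false ? 'aborted' : 'success'
  } catch (err) {
    return err.message === 'Aborted' ? 'aborted' : 'failed'
  }
}
```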

Implicit script execution result

The following only applies to before events.

When a value is returned, that value is used instead of the original value.

Let’s look at the following example:
export default {
  // Use the ones you need, delete the rest
  triggers ({ before }) {
    return before('create', 'update')
      .where('module', 'contact')
  },

  exec({ $record }, ctx) {
    $record.values.FullName = `${$record.values.FirstName} ${$record.values.LastName}`
    return $record
  },
}

The above example calculates the FullName value from FirstName and LastName. Returning $record instructs the system to use the new version of the $record (the one with FullName calculated).

If we didn’t return anything (null or undefined, for example), the change would be discarded and the original value used.

The return value for after events is discarded (can not be updated).

Sink script execution result

When a value is returned, that value is used as an HTTP response.

Let’s look at the following example:
export default {
  security: 'super-secret-user',

  triggers ({ on }) {
    return on('request')
      .where('request.path', '/some/extension')
      .where('request.method', 'GET')
      .for('system:sink')
  },

  exec ({ $request, $response }, { Compose }) {
    $response.status = 200
    $response.header = { 'Content-Type': ['text/html; charset=UTF-8'] }
    $response.body = `<h1>Hello World!</h1>`

    return $response
  },
}

The above example returns a simple static HTML document that displays Hello World!. You can use this approach to implement an OAuth flow or a confirmation page.

This isn’t limited to simple HTML documents.

Just make sure that your responses are properly structured (content type, status, and body).

You could do something like this:
export default {
  security: 'super-secret-user',

  triggers ({ on }) {
    return on('request')
      .where('request.path', '/model/roi')
      .where('request.method', 'GET')
      .for('system:sink')
  },

  async exec ({ $request, $response }, { Compose }) {
    $response.status = 200
    $response.header = { 'Content-Type': ['application/json'] }

    let pl = {}
    try {
      pl.product = await fetchProduct($request.query.productCode[0])
      pl.roi = await calculateRoi(pl.product)
    } catch ({ message }) {
      $response.status = 500
      $response.body = JSON.stringify({
        error: message,
      })
      return $response
    }

    $response.body = JSON.stringify(pl)
    return $response
  },
}

DevNote include other trigger types.

Automation triggers

Automation triggers (triggers) let you define when the script should be executed (along with some extra bits).

The Corredor server evaluates the triggers in an isolated context, outside of any imports or variable definitions. The following example will not work:

const MOD_NAME = 'Contact'

export default {
  triggers ({ on }) {
    return on('manual')
      .for(MOD_NAME) // <- we're referencing the constant here
  },
  exec (args, ctx) {...},
}
Table 1. Available trigger types:

Explicit

These are explicitly triggered by pressing a button.

Use explicit triggers when you wish to manually initiate something, such as an OAuth authentication flow, redirection to an external resource, or data export.

Implicit

These are implicitly triggered based on system events.

Use implicit triggers when you wish to automatically do something when another thing occurs; such as sending an email when you register a new user; or adding a changelog entry when the content changes.

Refer to resources and events for a complete list of events you can listen for.

Deferred

Server script only, requires explicit security context

The system triggers these sometime in the future; either periodically (defined with a cron expression), or at a specific timestamp (use the ISO 8601 format: YYYY-MM-DDThh:mm:ssZ).

A sample for an interval, and a timestamp.
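A sketch of both deferred forms. The at builder appears in the trigger signature shown earlier; the interval builder form, the resource name in .for(…), and the security user below are assumptions to verify against your Corredor version:

```javascript
export default {
  // Deferred scripts run without an invoking user,
  // so an explicit security context is required.
  security: 'automation-user', // hypothetical system user

  triggers ({ on, at }) {
    // Run once, at the given ISO 8601 timestamp:
    return at('2021-12-24T08:00:00Z').for('system')

    // …or run periodically, with a cron expression
    // (assumed builder form; verify against your release):
    // return on('interval').every('0 8 * * *').for('system')
  },

  async exec (args, { log }) {
    log('deferred execution')
  },
}
```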

Use deferred triggers when you want to repeat something or do something in the future; such as recurring payments or sending holiday newsletters to your subscribers.

The scheduler runs once per minute, so that is the highest accuracy Corteza supports.

Sink

Server script only, requires explicit security context

These are triggered by the system when it receives a request; either HTTP, or email.

Use sink triggers when you want to respond to requests; such as webhooks for external services or custom API endpoints. For example capturing data from external forms, tracking external document changes, and capturing payments.

We recommend you use the REST API whenever possible.

Defining the resource

The defined resource specifies what resource the automation script is executed for; for example a record, a module, or a user. Refer to resources and events for a complete list of available resources.

This is done by specifying a single .for(…) call.

It should look like this:
.for('ResourceGoesHere')
An example of specifying a resource:
triggers ({ before }) {
  return before('create', 'update')
    // This will trigger for a compose record resource
    .for('compose:record')
},

Defining constraints

Constraints let you define precisely when the automation script should execute.

This is done by chaining a series of .where(…) calls.

Each call should look like this:
.where(
  resourceAttribute,(1)
  comparator|value,(2)
  [value],(3)
)
1 The resource attribute that the constraint should check; see resource constraints for a complete list.
2 When 2 arguments are provided, this is the value to check against; when 3 arguments are provided, this is the comparison operator.
3 When provided, this is the value to check against.
An example of chaining constraints:
triggers ({ before }) {
  return before('create', 'update')
    .for('compose:record')
    // vv these two vv
    .where('module', 'Lead')
    .where('namespace', 'crm')
},
Table 2. Available comparison operators:

Equals (default)

eq, =, ==, ===

All of the above operators work the same. You are free to use whichever you prefer.

If you’re checking for equality, you can omit the operator.

Not equals

not eq, ne, !=, !==

All of the above operators work the same. You are free to use whichever you prefer.

Partial equals

like

Supported wildcards:
  • one or more characters: %, *,

  • one character: _, ?.

Partial not equals

not like

Supported wildcards:
  • one or more characters: %, *,

  • one character: _, ?.

Regex equals

~

Regex not equals

!~
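The like wildcards behave like simplified patterns. A hypothetical helper (not part of Corredor) that translates a like pattern into a regular expression makes the semantics concrete:

```javascript
// Translate a 'like' pattern into a RegExp.
function likeToRegExp (pattern) {
  // Escape regex metacharacters, but keep the wildcard characters.
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, '\\$&')
  // Per the table above: % and * match one or more characters,
  // _ and ? match exactly one character.
  return new RegExp('^' + escaped.replace(/[%*]/g, '.+').replace(/[_?]/g, '.') + '$')
}
```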

Automation trigger conventions

Use object destructuring

Object destructuring shortens the trigger definition.

For example:
// Instead of using:
triggers (t) {
  return t.after('create')
    .for('compose:record')
    .where('module', 'super_secret_module')
},

// you can do:
triggers ({ after }) {
  return after('create')
    .for('compose:record')
    .where('module', 'super_secret_module')
},

// Neat, right?!
Make trigger constraints as strict as possible

Having loose constraints can cause unwanted issues when there are multiple namespaces under the same instance. Two namespaces could easily define a module with the same handle, which would cause the script to execute for both of them.

For example:
// Instead of using:
triggers (t) {
  return t.after('create')
    .for('compose:record')
    .where('module', 'super_secret_module')
},

// you can do:
triggers ({ after }) {
  return after('create')
    .for('compose:record')
    .where('module', 'super_secret_module')
    .where('namespace', 'super_secret_namespace')
},

Security Context

The security context lets you control script execution based on the invoking user and their roles.

Invoking user

The invoking user is someone who performed an action that triggered the script execution.

For example; you pressed a button, so you are the invoking user.

Security context lets you define the invoking user, to permit operations over resources that the actual user may not have access to. For example, you can permit regular users to access records via automation scripts, but not directly via the record list.

Deferred and sink scripts require you to specify the security context as the invoker is not known.

An example of defining an invoking user:
// This security context forces the system to use some-user-identifier-here when executing the script
export default {
  triggers (t) {...},

  security: 'some-user-identifier-here',

  exec (args, ctx) {...}
}

You can use the user’s handle, email or ID as the some-user-identifier-here value.

Forcing the invoking user is only available for server scripts.

We suggest you create a new system user that is responsible for the script execution. For example, our DocuSign extension requests a new ext_docusign user.

Restricting script execution

Security context lets you prevent specific users from performing specific operations. For example, you can prevent regular users from signing documents or sending quotes.

Use these properties when defining the context:
  • allow: specifies what roles are permitted to trigger the automation script.

  • deny: specifies what roles are not allowed to trigger the automation script.

If a user is not allowed to trigger an explicit script (a button), the button is shown as disabled.

An example of permitting access:
// This security context only permits the administrator and superuser to trigger the script.
// Other roles will not be able to trigger it.
export default {
  triggers (t) {...},

  security: {
    allow: ['administrator', 'superuser'],
  },

  exec (args, ctx) {...}
}
An example of denying access:
// This security context permits all roles but the client and lead.
export default {
  triggers (t) {...},

  security: {
    deny: ['client', 'lead'],
  },

  exec (args, ctx) {...}
}

API clients

API clients provide an SDK for working with the Corteza API. They are provided in the execution context (here: exec(args, ctx); see Script execution).

Available Corteza API clients: SystemAPI, ComposeAPI, and MessagingAPI (see the execution context above).

All API operations are achievable via these clients.
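For example, a server script can call ComposeAPI directly. The recordList call and its parameter names follow the corteza-js API client; treat the exact signature as something to verify against your release:

```javascript
export default {
  triggers ({ after }) {
    return after('create')
      .for('compose:record')
      .where('module', 'Lead')
  },

  async exec ({ $record }, { ComposeAPI }) {
    // Low-level API call; parameters follow corteza-js
    // (verify for your release)
    const { set } = await ComposeAPI.recordList({
      namespaceID: $record.namespaceID,
      moduleID: $record.moduleID,
      query: 'Status = "Open"',
    })
    return set
  },
}
```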

DevNote extract methods here?

Corredor helpers

Corredor helpers implement the most common automation script operations, such as creating new records, registering users, and sending emails. They are provided in the execution context (here: exec(args, ctx) — see Script execution).

Corredor helpers are context-aware, meaning that they can automatically determine some arguments. For example; when creating a record, Corredor helpers will know what namespace, module, and record you are using.
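A sketch using the Compose helper. The makeRecord and saveRecord methods follow the corteza-js helper naming, but verify the names against your release:

```javascript
export default {
  triggers ({ after }) {
    return after('create')
      .for('compose:record')
      .where('module', 'Lead')
  },

  async exec ({ $record }, { Compose }) {
    // Context-aware: the namespace is inferred from the current record,
    // so only the values and the target module need to be given.
    const task = await Compose.makeRecord({ Subject: 'Follow up' }, 'Task')
    return Compose.saveRecord(task)
  },
}
```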

Available Corteza Corredor helpers: System, Compose, ComposeUI, and Messaging (see the execution context above).

Corredor helpers can be used outside of your automation scripts. If your application needs to interact with Corteza, you can use them.

DevNote extract methods here?

Modifying existing extensions

Create a modified copy

To modify the extension:
  1. copy the extension source (clone the repository, or copy the files),

  2. modify the source as you see fit,

  3. deploy your version instead of the original version.

Overwrite the scripts

When Corredor processes your automation scripts, each is assigned a name auto-generated from its source file path.

For example:
# The CRM extension
/ server-scripts
  / crm
    / Lead
      / SetLabel.js

The SetLabel.js script is assigned /server-scripts/crm/Lead/SetLabel.js:default as the name.

To overwrite the SetLabel.js script, you must define a script that will be assigned the same name (has the same path).

For example:
# The CRM extension
/ package.json
/ node_modules
/ server-scripts
  / crm
    / Lead
      / SetLabel.js # <- We're targeting this one
      / AnotherScript.js

# Your extension
/ package.json
/ node_modules
/ server-scripts
  # To overwrite something in the CRM extension
  / crm
    / Lead
      / SetLabel.js # <- This version will replace the CRM version

  # The rest of your code goes here
  / extension
    / Lead
      / SomeScript.js

For this to work, you must make sure that your extension is included after the extension you wish to modify.

For example:
# This will NOT work; the CRM is included after
CORREDOR_EXT_SEARCH_PATHS="/your-ext:/crm"

# This will work; the CRM is included before
CORREDOR_EXT_SEARCH_PATHS="/crm:/your-ext"

Using node modules

Corteza Corredor supports the use of external node modules, both in server scripts and in client scripts.

Corteza Corredor uses the Yarn package manager.

To add a new node module either:
  • manually insert it into the package.json file,

  • run yarn add NAME_GOES_HERE.

When you register and load the extension in the Corredor server, it will automatically resolve any changed dependency from the package.json file.

We’re observing some anomalies when running Yarn inside a docker container.

If you’re getting an error message similar to the one below, it means that Yarn was not able to install the dependencies. This error occurs when Yarn is unable to store its cache.

{
  "type": "error",
  "data": "https://registry.yarnpkg.com/rxjs/-/rxjs-6.6.3.tgz: Extracting tar content of undefined failed, the file appears to be corrupt: \\"ENOSPC: no space left on device, write\\""
}
To fix this, you need to:
  1. open a shell in the Corredor container (docker-compose exec -u root corredor sh),

  2. cd into the mounted volume,

  3. run yarn --cache-folder /tmp.

The dependencies should now be installed and available for use. The above yarn command manually runs the install process, discarding the cache.

Node modules can then be used just like anywhere else.

Example package.json:
{
  "dependencies": {
    "axios": "^0.18.0"
  }
}
Example usage from automation scripts:
import axios from 'axios'

export default {
  // ...

  async exec() {
    await axios.get(...)

    // ...
  }
}

Different extensions do not share their dependencies. If two extensions use the same dependency, they both need to define it.

Testing

Extensions are essentially Node.js projects with some extra bits; meaning that the extensions can be tested in the same way.

You are free to use any testing framework and any testing methodology you wish.

We usually use Chai, Mocha, Nyc, and Sinon.

DevNote provide some more examples for other frameworks?

An example setup

The file structure (yes, yes; the sources are below):
/ .gitignore
/ .eslintrc.js
/ .mocharc.js
/ package.json

/ server-scripts
    / Sample.js
    / Sample.test.js
    / ...
/ client-scripts
    / ...
.gitignore:
.vscode
node_modules
.nyc_output
coverage
yarn-error.log
.eslintrc.js:
module.exports = {
  root: false,
  env: {
    node: true,
    es6: true,
  },
  extends: [
    'standard',
  ],
}
.mocharc.js:
module.exports = {
  require: [
    'esm',
  ],
  'full-trace': true,
  bail: true,
  recursive: true,
  extension: ['.test.js'],
  spec: [
    'client-scripts/**/*.test.js',
    'server-scripts/**/*.test.js',
  ],
  'watch-files': [ 'src/**' ],
}
package.json:
{
  "scripts": {
    "lint": "eslint {server-scripts,client-scripts}/**/* --ignore-pattern *.test.js",
    "test:unit": "mocha",
    "test:unit:cc": "nyc mocha"
  },
  "devDependencies": {
    "chai": "^4.2.0",
    "eslint": "^6.8.0",
    "eslint-config-standard": "^14.1.0",
    "eslint-plugin-import": "^2.18.2",
    "eslint-plugin-node": "^10.0.0",
    "eslint-plugin-promise": "^4.2.1",
    "eslint-plugin-standard": "^4.0.1",
    "esm": "^3.2.25",
    "mocha": "^7.0.1",
    "nyc": "^14.1.1",
    "sinon": "^8.1.1"
  },
  "nyc": {
    "all": true,
    "reporter": [
      "lcov",
      "text"
    ],
    "include": [
      "client-scripts/**/*.js",
      "server-scripts/**/*.js"
    ],
    "exclude": [
      "**/*.test.js"
    ],
    "check-coverage": true,
    "per-file": true,
    "branches": 0,
    "lines": 0,
    "functions": 0,
    "statements": 0
  }
}
Sample.js
export default {
  /* istanbul ignore next */
  triggers ({ before }) {
    return before('create')
  },

  exec () {
    return 'Hello World!'
  }
}

Note this part:

// vv this line here vv
/* istanbul ignore next */
triggers ({ before }) {
  return before('create')
},

istanbul ignore next excludes the next function from the coverage report.

Sample.test.js
import { expect } from 'chai'
import Sample from './Sample'

describe(__filename, () => {
  describe('Sample exec result', () => {
    it('should return a string', () => {
      expect(Sample.exec()).to.eq('Hello World!')
    })
  })
})
The above package.json defines three scripts:
  • lint: lint the code using the default ES6 standard (can be configured; see here),

  • test:unit: unit test the code with your .test.js files (can be configured in the .mocharc.js file),

  • test:unit:cc: unit test the code and return a code coverage report.

The code coverage report gets generated into the coverage directory.

Inspect the coverage/lcov-report directory for an HTML report.

I usually use the http-server package to help with this, but a simple "Open in <browser name here>" will do.

http-server coverage/lcov-report

DevNote: provide some complex examples using Sinon and promises.

Deploying extensions

Setting up

To use the extension, it must be available to the Corredor server, either locally or on the server.

If you are running the Corredor without Docker (from source code), you can skip any Docker related steps.

We’ll assume that:
  • your current working directory is where your extension is,

  • your server deploy directory is /opt/deploy/test-project,

  • your file structure looks like this (where your Corteza is running):

data/
docker-compose.yml
.env
To use the extension:
  1. Create a new directory for the extension; we’ll name it corredor, but the name doesn’t matter.

  2. Transfer the extension source files into the newly created directory (see below sections for tips).

  3. Add a new volume to the docker-compose.yml file that will contain the extension. For example, under the corredor service: volumes: [ "./corredor:/corredor/test-extension", …other volumes you might have… ]

  4. Edit the .env (CORREDOR_EXT_SEARCH_PATHS variable) file to register the new extension. For example CORREDOR_EXT_SEARCH_PATHS=/extensions:/extensions/*:/corredor/test-extension.

  5. Reload the configurations (docker-compose up -d).

At the end, your file structure should look like this:

data/
docker-compose.yml
.env
corredor/
  test-extension/
    server-scripts/
      /...
    client-scripts/
      /...

CORREDOR_EXT_SEARCH_PATHS can contain multiple paths separated by colon (:).

You can use docker-compose logs -t --tail 100 -f corredor to see if the extension was registered and processed correctly.

Upload using git

If you are using git and a repository, we suggest you use that. Clone the repository onto your server (into the volume mentioned above). You can then pull the changes whenever the source code changes.

If it is a private repository, make sure that your git client on your server has access to it.

Upload manually

You can use scp, rsync, or any other client. Upload the extension’s source into the volume mentioned above.

I usually use rsync as it makes things a bit easier.

An example rsync command following the above assumptions; make sure to change the parameters:

rsync -av -e ssh --exclude="node_modules/" ./ SSH_USERNAME_HERE@ssh.remote.tld:/opt/deploy/test-project/corredor --delete;

Debugging

DevNote: Add some insight in debugging failing scripts.

Why is this script not valid?

An automation script is valid if:
  • it is defined in a .js file,

  • it is located under client-scripts or server-scripts,

  • it defines an export default {…},

  • defines at least one valid trigger,

  • defines a security context if the script is a sink or deferred script,

  • conforms to the script signature.

In the case of a client-script, make sure that the file structure is appropriate.

Why can’t I see my scripts?

Check:
  • that the Corredor container has access to the extension,

    • either via an existing volume, or a new volume.

  • that you’ve uploaded your source files to your server,

  • that you’ve reloaded your containers.

If you’re registering a new volume, you must use docker-compose up -d

If you’re using an existing volume, you can use docker-compose restart