Getting Started

Python
from workflow.definitions.work import Work

work = Work(
    pipeline="sample-pipeline",
    site="local",
    user="username",
    function="workflow.examples.function.math",
    parameters={"alpha": 1, "beta": 2},
)

At a minimum, the Work object requires the pipeline, site, and user parameters. Additional lifecycle parameters are automatically added to the Work object as it progresses through the workflow system.

Deposit Work

To deposit the Work object into the workflow system, run:

Python
work.deposit()

Perform Work

Work is usually performed by a user-provided Python function. This function can be run anywhere, but running it inside a container is recommended. Future releases of the workflow system will provide a container orchestration layer to run the function at any scale, at any telescope site, and at Compute Canada sites.

To perform a unit of work, the user needs to withdraw the work, perform it, update it with the results, and finally deposit it back into the workflow system. This can be done manually by the user, or automatically by the workflow system using the workflow run CLI command.

User Provided Function

Let's say we have the following function that we wish to run:

Python
# Function that performs the work.
# We assume that this function is available as an import.
def add(a: int, b: int) -> int:
    result = a + b
    return result

Manual Withdraw and Deposit

Python
from workflow.definitions.work import Work

work = Work.withdraw(pipeline="sample-pipeline")
result = add(**work.parameters)
work.results = {"sum": result}  # results is expected to be a dictionary
work.status = "success"
work.update()
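The workflow run command described next wraps this same cycle with error handling. A minimal sketch of that wrapper logic, assuming only that the withdrawn work object exposes parameters, results, and status attributes; the "failure" status string is an assumed convention, not confirmed by this guide:

```python
from types import SimpleNamespace


def perform(work, func):
    """Run `func` on a withdrawn work object and record the outcome.

    `work` is any object with `parameters`, `results`, and `status`
    attributes; `func` is the user-provided function. The "failure"
    status value is an assumption, not confirmed by this guide.
    """
    try:
        # Pass the parameters field as keyword arguments, as the CLI does.
        work.results = {"result": func(**work.parameters)}
        work.status = "success"
    except Exception:
        work.status = "failure"
    return work


# Usage with a stand-in object; with the real system, pass the object
# returned by Work.withdraw(...) and call work.update() afterwards.
work = SimpleNamespace(parameters={"a": 1, "b": 2}, results=None, status=None)
perform(work, lambda a, b: a + b)
```

After this call, work.results holds {"result": 3} and work.status is "success".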

Automatic Withdraw and Deposit

Bash
workflow run sample-pipeline add

The simplest and preferred way to run a user function is through the workflow run CLI command. This command automatically withdraws the work, performs it by passing the parameters field to the user function as keyword arguments, and deposits the returned results back into the workflow system.

CLI Command: workflow run

Bash
workflow run {pipeline-name} {module.submodule.function}

where,

  • pipeline-name: the name of the pipeline. This defines what work will be withdrawn by the running pipeline.

  • module.submodule.function: the full Python specification of the pipeline function that will be run.

The pipeline CLI assumes that the pipeline function follows the CHIME/FRB pipeline interface specification. That is, it takes keyword arguments as inputs and outputs a results dictionary, a products list of paths to output files, and a plots list of paths to output plots.
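Under one plausible reading of that specification, assuming the three outputs are returned together as a tuple, the add function from earlier could be rewritten as follows; the "sum" key is illustrative, not required by the specification:

```python
# Sketch of a function conforming to the interface described above:
# keyword arguments in; a results dictionary, a products list, and a
# plots list out.
def add(alpha: int, beta: int) -> tuple:
    results = {"sum": alpha + beta}  # dictionary of named result values
    products = []                    # paths to output data files, if any
    plots = []                       # paths to output plot images, if any
    return results, products, plots
```

With work.parameters set to {"alpha": 1, "beta": 2}, calling add(**work.parameters) would return ({"sum": 3}, [], []).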

Example for header localization persistent container:

Bash
poetry run workflow run header-localization frb_l2l3.utils.header_localization.main

It is strongly recommended to set as many fields as possible on the Work object. This makes it easier not only to manage the workflow, but also to retrieve the results of the work once it is completed.

Python
# Metadata fields, typically set before depositing the work:
work.event = [12345, 67890]
work.user = "some-user"
work.tag = ["some-tag", "some-other-tag"]
work.group = ["some-working-group", "some-other-working-group"]
work.site = "chime"

Python
# Result fields, set after the work has been performed:
work.results = {"some-parameter": "some-value"}
work.plots = ["some-plot.png"]
work.products = ["some-product.fits"]
work.status = "success"

For a full list of parameters that can be set in the Work object see Work Object.