Task YAML Structure
Task card mental model
A Jarvis-HEP run does three things:
- Draw a parameter point.
- Run a workflow that turns that point into observables.
- Compute a LogLikelihood from those observables.
YAML structure (overview)
New to YAML? Start here: YAML format overview
Quick checklist (required vs optional)
- Required: Scan, Sampling
- Recommended: EnvReqs
- Choose one workflow backend: Calculators or Operas
- Optional: LibDeps, Utils
A typical card is organized like this:
Scan: # run name + output location
Sampling: # how points are drawn + objective
LibDeps: # shared backend deps installed once (optional)
Calculators: # external-program workflow (optional)
Operas: # in-process workflow (optional)
EnvReqs: # platform + dependency contract (recommended)
Utils: # helper functions (optional)
Scan (run location and output layout)
Scan defines the run name and where outputs are written.
Minimal (copy-paste)
Scan:
  name: "MSSM_Run"
  save_dir: "&J/outputs"
For a standalone project, Jarvis writes outputs under the project root using <TASK-NAME> = Scan.name:
&J/outputs/<TASK-NAME>/
Path placeholders and resolution: Placeholder in path resolution
Common options
Scan:
  name: "MSSM_Run"
  save_dir: "&J/outputs"
  sample_directory:
    limit: 200
    width: 6
  archive_samples: true
- sample_directory: configures how sample data is stored or displayed.
- limit: 200: caps the number of samples kept in the directory at 200.
- width: 6: sets the display width to six items per row or column.
- archive_samples: true: archives samples that exceed retention limits, removing them from active view.
Output directory layout
The output layout follows the CLI contract: Command line tools
Common locations:
- &J/outputs/<TASK-NAME>/SAMPLE/: per-sample working files
- &J/outputs/<TASK-NAME>/DATABASE/: structured outputs and converted products
- &J/logs/<TASK-NAME>/: run logs
- &J/images/<TASK-NAME>/: flowchart and plots
Sampling (how points are generated)
High-level shape (reference):
Sampling:
  Method: "Random"   # sampler name
  Variables: []      # scan variables
  Bounds: {}         # sampler-specific controls (optional)
  LogLikelihood: []  # objective definition
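Filled in, a Sampling block might look like the sketch below. The variable and likelihood entries (and their keys such as distribution, min, max, and expression) are illustrative assumptions, not the canonical schema; consult the Sampling and Symbolic Expression pages for the real fields.

```yaml
# Hedged sketch -- entry keys are assumptions, not the canonical schema
Sampling:
  Method: "Random"              # sampler name, as in the shape above
  Variables:
    - name: "M1"                # hypothetical scan variable
      distribution: "Flat"      # assumed key: how the point is drawn
      min: 100.0
      max: 1000.0
  Bounds: {}                    # no extra sampler-specific controls
  LogLikelihood:
    - name: "LogL_higgs"        # hypothetical objective term
      expression: "Gauss(mh, 125.25, 0.17)"  # assumed symbolic-expression form
```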
LibDeps (shared external backends)
LibDeps lists external program packages that are not part of the per-sample workflow.
Use it for backends that you install once (per machine or per environment) and then reuse across scans.
High-level shape (conceptual):
LibDeps:
# shared backend packages
Details: Library dependencies
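As a purely illustrative sketch of the idea (a backend installed once and reused across scans), an entry might look like the following. Every key name here is a guess for illustration; the real schema is on the Library dependencies page.

```yaml
# Hypothetical sketch -- key names are illustrative, not the real schema
LibDeps:
  - name: "SPheno"              # example of a shared external backend
    path: "&J/libs/SPheno"      # assumed install location (placeholder-style path)
    install_cmd: "make"         # assumed one-time build command, run per machine
```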
Calculators (external executables backend)
High-level shape (conceptual):
Calculators:
# one or more calculators / steps
Details: Calculators
Operas (in-process operators backend)
High-level shape (conceptual):
Operas:
# modules/operators and mappings
Details: Operas
EnvReqs (platform and dependencies)
High-level shape:
EnvReqs:
  OS: []
  Python: {}
  Check_default_dependences: {} # optional
Details: Environment Requirements
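A filled-in sketch of the shape above might read as follows. The nested keys (version, modules, and the command-probe entries) are assumptions for illustration; the Environment Requirements page defines the actual contract.

```yaml
# Hedged sketch -- nested keys are assumptions, not the documented contract
EnvReqs:
  OS:
    - "Linux"                   # hypothetical platform entry
  Python:
    version: ">=3.9"            # assumed key: interpreter constraint
    modules:
      - "numpy"                 # assumed key: required Python packages
  Check_default_dependences:
    make: "make --version"      # assumed key: command used to probe a tool
```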
Utils (helper functions)
Common uses: interpolation tables, reusable functions used by expressions.
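To make the two common uses concrete, a Utils block could be sketched like this. Both top-level keys (Interpolations, Functions) and everything under them are hypothetical placeholders for illustration only.

```yaml
# Hypothetical sketch -- all keys here are illustrative placeholders
Utils:
  Interpolations:               # e.g. a tabulated quantity used by expressions
    - name: "xsec_table"
      file: "&J/data/xsec.csv"  # assumed placeholder-style data path
      kind: "linear"
  Functions: []                 # reusable helper functions for expressions
```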
Expressions and I/O
- Symbolic expressions used in likelihoods and mappings: Symbolic Expression
- Input and output file formats and conventions: IO files