Pipelines

Overview

Specifications for nf-core pipelines

Recommendations

Bioconda

Package software using bioconda and biocontainers.

Build with community

Develop pipelines openly, with input from the nf-core community.

Cloud compatible

Pipelines should be tested on cloud computing environments.

Custom containers

Guidance for custom containers hosted on docker.io or ghcr.io.

DOIs

Pipelines should have digital object identifiers (DOIs).

File formats

Use community accepted modern file formats.

Publication credit

Recognition for nf-core pipeline publications.

Testing

Use nf-test to validate the pipeline.
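As a sketch, a minimal pipeline-level nf-test case might look like the following (the parameter names and samplesheet path are placeholders, not taken from any specific pipeline):

```groovy
nextflow_pipeline {

    name "Test pipeline with minimal input"
    script "main.nf"

    test("runs successfully") {
        when {
            params {
                input  = "assets/samplesheet.csv"  // placeholder input
                outdir = "$outputDir"              // nf-test-provided output directory
            }
        }
        then {
            assert workflow.success
        }
    }
}
```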

Requirements

Acknowledgements

Pipelines must properly acknowledge prior work.

CI testing

Pipelines must run CI tests.

Community owned

Pipelines are owned by the community.

Docker

Software must be bundled using Docker and versioned.

Docs

Pipeline documentation must be hosted on the nf-core website.

Git branches

Use the `master`/`main`, `dev`, and `TEMPLATE` branches.

Identity branding

Primary development must take place in the nf-core organisation.

Keywords

Provide excellent documentation and GitHub repository keywords.

Linting

The pipeline must not have any failures in the `nf-core pipelines lint` tests.

Minimum inputs

Pipelines should be able to run with as little input as possible.

MIT license

Pipelines must be open source, released under the MIT license.

Nextflow

Pipelines must be built with Nextflow.

Parameters

Strive to have standardised usage.

RO Crate

Pipelines must come with their own Research Object (RO) Crate.

Semantic versioning

Pipelines must use semantic versioning.

Single command

Pipelines should run in a single command.
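For example, a typical single-command invocation follows this shape (`nf-core/rnaseq` is a real pipeline, but the input file and output directory here are placeholders):

```bash
nextflow run nf-core/rnaseq \
    -profile docker \
    --input samplesheet.csv \
    --outdir results
```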

Use the template

All nf-core pipelines must be built using the nf-core template.

Workflow name

Names should be lower case and without punctuation.

Workflow size

Not too big, not too small.

Workflow specificity

There should only be a single pipeline per data / analysis type.