Working with Files¶
Scientific analysis workflows often involve processing large numbers of files. Nextflow provides powerful tools to handle files efficiently, helping you organize and process your data with minimal code.
Learning goals¶
In this side quest, we'll explore how Nextflow handles files, from basic file operations to more advanced techniques for working with file collections. You'll learn how to extract metadata from filenames, which is a common requirement in scientific analysis pipelines.
By the end of this side quest, you'll be able to:
- Create Path objects from file path strings using Nextflow's `file()` method
- Access file attributes such as name, extension, and parent directory
- Handle both local and remote files transparently using URIs
- Use channels to automate file handling with `channel.fromPath()` and `channel.fromFilePairs()`
- Extract and structure metadata from filenames using string manipulation
- Group related files using pattern matching and glob expressions
- Integrate file operations into Nextflow processes with proper input handling
- Organize process outputs using metadata-driven directory structures
These skills will help you build workflows that can handle different kinds of file inputs with great flexibility.
Prerequisites¶
Before taking on this side quest, you should:
- Have completed the Hello Nextflow tutorial or equivalent beginner's course.
- Be comfortable using basic Nextflow concepts and mechanisms (processes, channels, operators)
0. Get started¶
Open the training codespace¶
If you haven't yet done so, make sure to open the training environment as described in the Environment Setup.
Move into the project directory¶
Let's move into the directory where the files for this tutorial are located.
You can set VSCode to focus on this directory:
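If you're working in the standard training environment, the commands are likely the following (the directory name is inferred from the absolute paths shown later in this tutorial):

```bash
cd side-quests/working_with_files
code .
```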
Review the materials¶
You'll find two small workflow files (file_operations.nf and count_lines.nf) and a data directory containing some example data files.
.
├── count_lines.nf
├── data
│ ├── patientA_rep1_normal_R1_001.fastq.gz
│ ├── patientA_rep1_normal_R2_001.fastq.gz
│ ├── patientA_rep1_tumor_R1_001.fastq.gz
│ ├── patientA_rep1_tumor_R2_001.fastq.gz
│ ├── patientA_rep2_normal_R1_001.fastq.gz
│ ├── patientA_rep2_normal_R2_001.fastq.gz
│ ├── patientA_rep2_tumor_R1_001.fastq.gz
│ ├── patientA_rep2_tumor_R2_001.fastq.gz
│ ├── patientB_rep1_normal_R1_001.fastq.gz
│ ├── patientB_rep1_normal_R2_001.fastq.gz
│ ├── patientB_rep1_tumor_R1_001.fastq.gz
│ ├── patientB_rep1_tumor_R2_001.fastq.gz
│ ├── patientC_rep1_normal_R1_001.fastq.gz
│ ├── patientC_rep1_normal_R2_001.fastq.gz
│ ├── patientC_rep1_tumor_R1_001.fastq.gz
│ └── patientC_rep1_tumor_R2_001.fastq.gz
└── file_operations.nf
This directory contains paired-end sequencing data from three patients (A, B, C).
For each patient, we have samples that are of type tumor (typically originating from tumor biopsies) or normal (taken from healthy tissue or blood).
If you're not familiar with cancer analysis, just know that this corresponds to an experimental model that uses paired tumor/normal samples to perform contrastive analyses.
For patient A specifically, we have two sets of technical replicates (repeats).
The sequencing data files are named with a typical _R1_ and _R2_ convention for what are known as 'forward reads' and 'reverse reads'.
Don't worry if you're not familiar with this experimental design, it's not critical for understanding this tutorial.
Review the assignment¶
Your challenge is to write a Nextflow workflow that will take in the sequencing data files, extract basic metadata from the file naming structure, and use that metadata to organize the analysis and outputs appropriately.
Readiness checklist¶
Think you're ready to dive in?
- I understand the goal of this course and its prerequisites
- My codespace is up and running
- I've set my working directory appropriately
- I understand the assignment
If you can check all the boxes, you're good to go.
1. Basic file operations¶
1.1. Identify the type of an object with .class¶
Take a look at the workflow file file_operations.nf:
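A minimal sketch of its likely contents, assuming the variable is named myFile (the name used later in this tutorial):

```groovy
workflow {
    // A plain string containing a relative file path
    myFile = 'data/patientA_rep1_normal_R1_001.fastq.gz'

    println "File: ${myFile}"
    println "File object class: ${myFile.class}"
}
```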
This is a mini-workflow (without any processes) that assigns a single file path to a variable, then prints it to the console along with its class.
What is .class?
In Nextflow, .class tells us what type of object we're working with. It's like asking "what kind of thing is this?" to find out whether it's a string, a number, a file, or something else.
This will help us illustrate the difference between a plain string and a Path object in the next sections.
Let's run the workflow:
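The command is presumably:

```bash
nextflow run file_operations.nf
```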
Output
As you can see, Nextflow printed the string path exactly as we wrote it.
This is just text output; Nextflow hasn't done anything special with it yet.
We've also confirmed that as far as Nextflow is concerned, this is only a string (of class java.lang.String).
That makes sense, since we haven't yet told Nextflow that it corresponds to a file.
1.2. Create a Path object with file()¶
We can tell Nextflow how to handle files by creating Path objects from path strings.
In our workflow, we can convert the string path data/patientA_rep1_normal_R1_001.fastq.gz to a Path object using the file() method, which provides access to file properties and operations.
Edit the file_operations.nf to wrap the string with file() as follows:
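A sketch of the edit, under the same assumptions as above:

```groovy
workflow {
    // Wrap the path string in file() to create a Path object
    myFile = file('data/patientA_rep1_normal_R1_001.fastq.gz')

    println "File: ${myFile}"
    println "File object class: ${myFile.class}"
}
```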
Now run the workflow again:
Output
This time, you see the full absolute path instead of the relative path we provided as input.
Nextflow has converted our string into a Path object and resolved it to the actual file location on the system.
The file path will now be absolute, as in /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz.
Notice also that the Path object class is sun.nio.fs.UnixPath: this is the underlying Java class Nextflow uses to represent local files.
As we'll see later, remote files have different class names (such as nextflow.file.http.XPath for HTTP files), but they all behave the same way and can be used interchangeably in your workflows.
Tip
The key difference:
- Path string: Just text that Nextflow treats as characters
- Path object: A smart file reference that Nextflow can work with
Think of it like this: a path string is like writing an address on paper, while a Path object is like having the address loaded in a GPS device that knows how to navigate there and can tell you details about the journey.
1.3. Access file attributes¶
Why is this helpful? Well, now that Nextflow understands that myFile is a Path object and not just a string, we can access the various attributes of the Path object.
Let's update our workflow to print out the built-in file attributes:
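A sketch of the updated workflow, with the attribute names taken from the output below:

```groovy
workflow {
    myFile = file('data/patientA_rep1_normal_R1_001.fastq.gz')

    println "File object class: ${myFile.class}"
    println "File name: ${myFile.name}"
    println "Simple name: ${myFile.simpleName}"
    println "Extension: ${myFile.extension}"
    println "Parent directory: ${myFile.parent}"
}
```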
Run the workflow:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [ecstatic_ampere] DSL2 - revision: f3fa3dcb48
File object class: sun.nio.fs.UnixPath
File name: patientA_rep1_normal_R1_001.fastq.gz
Simple name: patientA_rep1_normal_R1_001
Extension: gz
Parent directory: /workspaces/training/side-quests/working_with_files/data
You see the various file attributes printed to the console above.
1.4. Solve basic file input problems¶
The difference between strings and Path objects becomes critical when you start building actual workflows with processes.
This often trips up newcomers to Nextflow, so let's take a few minutes to work through the case of a workflow where this has been done wrong.
1.4.1. Diagnose the underlying problem¶
We've given you a small workflow called count_lines.nf that is meant to take a text file (with a file path hardcoded) and count how many lines are in it.
Don't look at the code just yet; run it first, as follows:
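Presumably:

```bash
nextflow run count_lines.nf
```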
This workflow should fail; have a look through the output and find the error message.
Output
N E X T F L O W ~ version 25.04.3
Launching `count_lines.nf` [goofy_koch] DSL2 - revision: 4d9e909d80
executor > local (1)
[7f/c22b7f] COUNT_LINES [ 0%] 0 of 1
ERROR ~ Error executing process > 'COUNT_LINES'
Caused by:
Process `COUNT_LINES` terminated with an error exit status (1)
Command executed:
executor > local (1)
[7f/c22b7f] COUNT_LINES [ 0%] 0 of 1 ✘
WARN: Got an interrupted exception while taking agent result | java.lang.InterruptedException
ERROR ~ Error executing process > 'COUNT_LINES'
Caused by:
Process `COUNT_LINES` terminated with an error exit status (1)
Command executed:
set -o pipefail
echo "Processing file: data/patientA_rep1_normal_R1_001.fastq.gz"
gzip -dc data/patientA_rep1_normal_R1_001.fastq.gz | wc -l
Command exit status:
1
Command output:
Processing file: data/patientA_rep1_normal_R1_001.fastq.gz
0
Command error:
Processing file: data/patientA_rep1_normal_R1_001.fastq.gz
gzip: data/patientA_rep1_normal_R1_001.fastq.gz: No such file or directory
0
Work dir:
/workspaces/training/side-quests/working_with_files/work/7f/c22b7f6f86c81f14d53de15584fdd5
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
-- Check '.nextflow.log' file for details
This shows a lot of details about the error because the process is set to output debugging information; more about that in a bit.
These are the most relevant sections:
Command executed:
set -o pipefail
echo "Processing file: data/patientA_rep1_normal_R1_001.fastq.gz"
gzip -dc data/patientA_rep1_normal_R1_001.fastq.gz | wc -l
Command error:
Processing file: data/patientA_rep1_normal_R1_001.fastq.gz
gzip: data/patientA_rep1_normal_R1_001.fastq.gz: No such file or directory
0
This says the system couldn't find the file; however, if you look up the path, there is a file by that name in that location. So what's wrong?
Let's open the count_lines.nf workflow and have a look at the code.
| count_lines.nf | |
|---|---|
As advertised, this is a small workflow with one process (COUNT_LINES) that is meant to take a file input and count how many lines are in it.
What does debug true do?
The debug true directive in the process definition causes Nextflow to print the output from your script (like the line count "40") directly in the execution log.
Without this, you would only see the process execution status but not the actual output from your script.
For more information on debugging Nextflow processes, see the Debugging Nextflow Workflows side quest.
Can you find the error? Have a look at the process input definition.
The input is marked as a val, which indicates a value input, yet the script tries to use it as a file.
When we ran this, Nextflow passed the string value through to the script, but it didn't stage the actual file in the working directory.
So the process tried to use the relative string, data/patientA_rep1_normal_R1_001.fastq.gz, but that file doesn't exist within the process working directory, so it failed.
1.4.2. Fix the input definition¶
To fix this problem, we'll need to change the input definition in the process to use a path input:
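A sketch of the corrected process, assuming the input variable is named input_file (the script body is taken from the error output above):

```groovy
process COUNT_LINES {
    debug true

    input:
    path input_file    // previously: val input_file

    script:
    """
    set -o pipefail
    echo "Processing file: ${input_file}"
    gzip -dc ${input_file} | wc -l
    """
}
```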
Do you think that'll be enough? Let's try it!
Go ahead and run the updated version.
Output
N E X T F L O W ~ version 25.04.3
Launching `count_lines.nf` [mighty_poitras] DSL2 - revision: e996edfc53
[- ] COUNT_LINES -
ERROR ~ Error executing process > 'COUNT_LINES'
Caused by:
Not a valid path value: 'data/patientA_rep1_normal_R1_001.fastq.gz'
Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
It failed again! But the error is different.
It may not look like it, but this is progress. Nextflow immediately detected the problem and failed before even starting the process.
When you specify a path input, Nextflow validates that you're passing actual file references, not just strings.
It's telling you that 'data/patientA_rep1_normal_R1_001.fastq.gz' is not a valid path value because it's a string, not a Path object.
1.4.3. Fix the file creation statement¶
Now let's finish fixing the issue by using the file() method to create a Path object from our string:
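A sketch of the corrected workflow block, under the same assumptions:

```groovy
workflow {
    // Create a Path object instead of passing a plain string
    COUNT_LINES(file('data/patientA_rep1_normal_R1_001.fastq.gz'))
}
```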
Let's run this one more time.
Output
This time, it worked correctly!
Nextflow staged the file in the process working directory, so the wc -l command is able to succeed.
Specifically, Nextflow carried out the following operations successfully:
- Staged the file into the working directory
- Decompressed the .gz file
- Counted the lines (40 lines in this case)
- Completed without errors
Takeaway¶
- Path strings vs Path objects: Strings are just text, Path objects are smart file references
- The `file()` method converts a string path into a Path object that Nextflow can work with
- You can access file properties like `name`, `simpleName`, `extension`, and `parent`
- Using Path objects instead of strings allows Nextflow to properly manage files in your workflow
- Process inputs: proper file handling requires Path objects, not strings, so that files are correctly staged and accessible to processes
2. Using remote files¶
One of the key features of Nextflow is the ability to switch seamlessly between local files (on the same machine) and remote files accessible over the internet.
If you're doing it right, you should never need to change the logic of your workflow to accommodate files coming from different locations. All you need to do to use a remote file is to specify the appropriate prefix in the file path when you're supplying it to the workflow.
For example, /path/to/data has no prefix, indicating that it's a 'normal' local file path, whereas s3://path/to/data includes the s3:// prefix, indicating that it's located in Amazon's S3 object storage.
Many different protocols are supported:
- HTTP(S)/FTP (http://, https://, ftp://)
- Amazon S3 (s3://)
- Azure Blob Storage (az://)
- Google Cloud Storage (gs://)
To use any of these, simply specify the relevant prefix in the string, which is then technically called a Uniform Resource Identifier (URI) rather than a file path. Nextflow will handle authentication, downloading and uploading, staging the files to the right place, and all the other file operations you would expect.
The key strength of this system is that it enables us to switch between environments without changing any pipeline logic. For example, you can develop with a small, local test set before switching to a full-scale test set located in remote storage simply by changing the URI.
2.1. Use a file from the internet¶
Let's test this out by replacing the local path we're providing to our workflow with an HTTPS path pointing to a copy of the same data stored on GitHub.
Warning
This will only work if you have an active internet connection.
Open file_operations.nf again and change the input path as follows:
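Only the path string changes; the URL below is reconstructed from the parent directory shown in the output further down, so treat it as an approximation:

```groovy
myFile = file('https://github.com/nextflow-io/training/blob/bb187e3bfdf4eec2c53b3b08d2b60fdd7003b763/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz')
```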
Let's run the workflow:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [insane_swartz] DSL2 - revision: fff18abe6d
File object class: class nextflow.file.http.XPath
File name: patientA_rep1_normal_R1_001.fastq.gz
Simple name: patientA_rep1_normal_R1_001
Extension: gz
Parent directory: /nextflow-io/training/blob/bb187e3bfdf4eec2c53b3b08d2b60fdd7003b763/side-quests/working_with_files/data
It works! You can see that very little has changed.
The one difference in the console output is that the path object class is now nextflow.file.http.XPath, whereas for the local path the class was sun.nio.fs.UnixPath.
You don't need to remember these classes; we just mention this to demonstrate that Nextflow identifies and handles the different locations appropriately.
Behind the scenes, Nextflow downloaded the file to a staging directory located within the work directory. That staged file can then be treated as a local file and symlinked into the relevant process directory.
You can verify that this happened by looking at the contents of the work directory under the hash value shown for the task.
Note that for larger files, the downloading step will take some extra time compared to running on local files. However, Nextflow checks whether it already has a staged copy to avoid unnecessary downloads. So if you run again on the same file and haven't deleted the staged file, Nextflow will use the staged copy.
This shows how easy it is to switch between local and remote data, which is a key feature of Nextflow.
Note
The one important exception to this principle is that you can't use glob patterns or directory paths with HTTP(S), because the protocol provides no way to list the contents of a directory, so you must specify exact file URLs.
However, other storage protocols such as blob storage (s3://, az://, gs://) can use both globs and directory paths.
Here's how you could use glob patterns with cloud storage:
```groovy
// S3 with glob patterns - would match multiple files
ch_s3_files = channel.fromPath('s3://my-bucket/data/*.fastq.gz')

// Azure Blob Storage with glob patterns
ch_azure_files = channel.fromPath('az://container/data/patient*_R{1,2}.fastq.gz')

// Google Cloud Storage with glob patterns
ch_gcs_files = channel.fromPath('gs://bucket/data/sample_*.fastq.gz')
```
We'll show you how to work with globs in practice in the next section.
2.2. Switch back to the local file¶
We're going to go back to using our local example files for the rest of this side quest, so let's switch the workflow input back to the original file:
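That is, the input line goes back to:

```groovy
myFile = file('data/patientA_rep1_normal_R1_001.fastq.gz')
```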
Takeaway¶
- Remote data is accessed using a URI (HTTP, FTP, S3, Azure, Google Cloud)
- Nextflow will automatically download and stage the data to the right place, as long as these paths are being fed to processes
- Do not write logic to download or upload remote files!
- Local and remote files produce different object types but work identically
- Important: HTTP/HTTPS only work with single files (no glob patterns)
- Cloud storage (S3, Azure, GCS) supports both single files and glob patterns
- You can seamlessly switch between local and remote data sources without changing code logic (as long as the protocol supports your required operations)
3. Loading files using the fromPath() channel factory¶
So far we've been working with a single file at a time, but in Nextflow, we're typically going to want to create an input channel with multiple input files to process.
A naive way to do that would be to combine the file() method with channel.of() like this:
```groovy
ch_files = channel.of(
    file('data/patientA_rep1_normal_R1_001.fastq.gz'),
    file('data/patientA_rep1_normal_R2_001.fastq.gz')
)
```
That works, but it's clunky.
This is where channel.fromPath() comes in: a convenient channel factory that bundles all the functionality we need to generate a channel from one or more static file strings as well as glob patterns.
3.1. Add the channel factory¶
Let's update our workflow to use channel.fromPath.
We've also commented out the code that prints out the attributes for now, and added a .view statement to print out just the filename instead.
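A sketch of what the updated workflow likely looks like:

```groovy
workflow {
    // Load the file into a channel instead of using file() directly
    ch_files = channel.fromPath('data/patientA_rep1_normal_R1_001.fastq.gz')

    // Print just the file name for each element
    // (the attribute println statements are commented out for now)
    ch_files.view { myFile -> "File name: ${myFile.name}" }
}
```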
Run the workflow:
Output
As you can see, the file path is loaded into the channel as a Path type object.
This is similar to what file() would have done, except now we have a channel that we can load more files into if we want.
Using channel.fromPath() is a convenient way of creating a new channel populated by a list of files.
3.2. View attributes of files in channel¶
In our first pass at using the channel factory, we simplified the code and just printed out the file name.
Let's go back to printing out the full file attributes:
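One way to do this is to have the view closure return a multi-line string (a sketch; the exact formatting in the original code may differ):

```groovy
ch_files.view { myFile ->
    "File object class: ${myFile.class}\n" +
    "File name: ${myFile.name}\n" +
    "Simple name: ${myFile.simpleName}\n" +
    "Extension: ${myFile.extension}\n" +
    "Parent directory: ${myFile.parent}"
}
```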
Since myFile is a proper Path object, we have access to all the same file attributes as before.
Run the workflow:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [furious_swanson] DSL2 - revision: c35c34950d
File object class: sun.nio.fs.UnixPath
File name: patientA_rep1_normal_R1_001.fastq.gz
Simple name: patientA_rep1_normal_R1_001
Extension: gz
Parent directory: /workspaces/training/side-quests/working_with_files/data
And there you are, same results as before but now we have the file in a channel, so we can add more.
3.3. Using a glob to match multiple files¶
There are several ways we could load more files into the channel. Here we're going to show you how to use glob patterns, which are a convenient way to match and retrieve file and directory names based on wildcard characters. The process of matching these patterns is called "globbing" or "filename expansion".
Note
As noted previously, Nextflow supports globbing to manage input and output files in the majority of cases, except with HTTPS filepaths because HTTPS cannot list multiple files.
Let's say we want to retrieve both files in a pair associated with a given patient: patientA_rep1_normal_R1_001.fastq.gz and patientA_rep1_normal_R2_001.fastq.gz.
Since the only difference between the filenames is the read number, i.e. the number right after the R, we can use the wildcard character * to stand in for the number as follows:
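Assuming the data directory layout shown earlier:

```
data/patientA_rep1_normal_R*_001.fastq.gz
```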
That is the glob pattern we need.
Now all we need to do is update the file path in the channel factory to use that glob pattern as follows:
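That is:

```groovy
ch_files = channel.fromPath('data/patientA_rep1_normal_R*_001.fastq.gz')
```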
Nextflow will automatically recognize that this is a glob pattern and will handle it appropriately.
Run the workflow to test that out:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [boring_sammet] DSL2 - revision: d2aa789c9a
File object class: sun.nio.fs.UnixPath
File name: patientA_rep1_normal_R1_001.fastq.gz
Simple name: patientA_rep1_normal_R1_001
Extension: gz
Parent directory: /workspaces/training/side-quests/working_with_files/data
File object class: sun.nio.fs.UnixPath
File name: patientA_rep1_normal_R2_001.fastq.gz
Simple name: patientA_rep1_normal_R2_001
Extension: gz
Parent directory: /workspaces/training/side-quests/working_with_files/data
As you can see, we now have two Path objects in our channel, which shows that Nextflow has done the filename expansion correctly and loaded both files as expected.
Using this method, we can retrieve as many or as few files as we want just by changing the glob pattern. If we made it more generous, for example by replacing all the variable parts of the filenames by * (e.g. data/patient*_rep*_*_R*_001.fastq.gz) we could grab all the example files in the data directory.
Takeaway¶
- `channel.fromPath()` creates a channel with files matching a pattern
- Each file is emitted as a separate element in the channel
- We can use a glob pattern to match multiple files
- Files are automatically converted to Path objects with full attributes
- The `.view()` method allows inspection of channel contents
4. Extracting basic metadata from filenames¶
In most scientific domains, it's very common to have metadata encoded in the names of the files that contain the data. For example, in bioinformatics, files containing sequencing data are often named in a way that encodes information about the sample, condition, replicate, and read number.
If the filenames are constructed according to a consistent convention, you can extract that metadata in a standardized manner and use it in the course of your analysis. That is a big 'if', of course, and you should be very cautious whenever you rely on filename structure; but the reality is that this approach is very widely used, so let's have a look at how it's done in Nextflow.
In the case of our example data, we know that the filenames include consistently structured metadata.
For example, the filename patientA_rep1_normal_R2_001 encodes the following:
- patient ID: `patientA`
- replicate ID: `rep1`
- sample type: `normal` (as opposed to `tumor`)
- read set: `R2` (as opposed to `R1`)
We're going to modify our workflow to retrieve this information in three steps:
- Retrieve the `simpleName` of the file, which includes the metadata
- Separate the metadata components using a method called `tokenize()`
- Use a map to organize the metadata
Warning
You should never encode sensitive information into filenames, such as patient names or other identifying characteristics, as that can compromise patient privacy or other relevant security restrictions.
4.1. Retrieve the simpleName¶
The simpleName is a file attribute that corresponds to the filename stripped of its path and extension.
Make the following edits to the workflow:
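A sketch of the edit, using a map() operation to pair each file with its simpleName:

```groovy
ch_files = channel.fromPath('data/patientA_rep1_normal_R*_001.fastq.gz')
    .map { myFile -> [myFile.simpleName, myFile] }

ch_files.view()
```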
This retrieves the simpleName and associates it with the full file object using a map() operation.
Run the workflow to test that it works:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [suspicious_mahavira] DSL2 - revision: ae8edc4e48
[patientA_rep1_normal_R2_001, /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R2_001.fastq.gz]
[patientA_rep1_normal_R1_001, /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz]
Each element in the channel is now a tuple containing the simpleName and the original file object.
4.2. Extract the metadata from the simpleName¶
At this point, the metadata we want is embedded in the simpleName, but we can't access individual items directly.
So we need to split the simpleName into its components.
Fortunately, those components are separated by underscores in the original filename, so we can apply a common Groovy string method called tokenize() that is perfect for this task.
Make the following edits to the workflow:
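A sketch of the edit:

```groovy
ch_files = channel.fromPath('data/patientA_rep1_normal_R*_001.fastq.gz')
    .map { myFile -> [myFile.simpleName.tokenize('_'), myFile] }

ch_files.view()
```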
The tokenize() method will split the simpleName string wherever it finds underscores, and will return a list containing the substrings.
Run the workflow:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [gigantic_gauss] DSL2 - revision: a39baabb57
[[patientA, rep1, normal, R1, 001], /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz]
[[patientA, rep1, normal, R2, 001], /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R2_001.fastq.gz]
Now the tuple for each element in our channel contains the list of metadata (e.g. [patientA, rep1, normal, R1, 001]) and the original file object.
That's great! We've broken down our patient information from a single string into a list of strings. We can now handle each part of the patient information separately.
4.3. Use a map to organize the metadata¶
Our metadata is just a flat list at the moment. It's easy enough to use but difficult to read.
What is the item at index 3? Can you tell without referring back to the original explanation of the metadata structure?
This is a great opportunity to use a key-value store, where each value is stored under a named key, so you can simply refer to a key to get the corresponding value.
In our example, that means going from this organization: `[patientA, rep1, normal, R1, 001]` to this one: `[id: patientA, replicate: 1, type: normal, readNum: 1]`.
In Nextflow, that's called a map.
Let's convert our flat list into a map now. Make the following edits to the workflow:
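A sketch of the edit; the variable names inside the closure are assumptions, but the key names and replace() calls are inferred from the output below:

```groovy
ch_files = channel.fromPath('data/patientA_rep1_normal_R*_001.fastq.gz')
    .map { myFile ->
        def (patient, replicate, type, readNum) = myFile.simpleName.tokenize('_')
        [
            [
                id: patient,
                replicate: replicate.replace('rep', ''),
                type: type,
                readNum: readNum.replace('R', '')
            ],
            myFile
        ]
    }

ch_files.view()
```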
While we were at it, we also simplified a couple of the metadata strings using a string replacement method called replace() to remove unnecessary characters (e.g. replicate.replace('rep', '') to keep only the number from the replicate IDs).
Let's run the workflow again:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [infallible_swartz] DSL2 - revision: 7f4e68c0cb
[[id:patientA, replicate:1, type:normal, readNum:2], /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R2_001.fastq.gz]
[[id:patientA, replicate:1, type:normal, readNum:1], /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz]
Now the metadata is neatly labeled (e.g. [id:patientA, replicate:1, type:normal, readNum:2]) so it's a lot easier to tell what is what.
It'll also be a lot easier to actually make use of elements of metadata in the workflow, and will make our code easier to read and more maintainable.
Takeaway¶
- We can handle filenames in Nextflow with the power of a full programming language
- We can treat the filenames as strings to extract relevant information
- Use of methods like `tokenize()` and `replace()` allows us to manipulate strings in the filename
- The `.map()` operation transforms channel elements while preserving structure
- Structured metadata (maps) makes code more readable and maintainable than positional lists
Next up, we will look at how to handle paired data files.
5. Handling paired data files¶
Many experimental designs produce paired data files that benefit from being handled in an explicitly paired way. For example, in bioinformatics, sequencing data is often generated in the form of paired reads, meaning sequence strings that originate from the same fragment of DNA (often called 'forward' and 'reverse' because they are read from opposite ends).
That is the case of our example data, where R1 and R2 refer to the two sets of reads.
Nextflow provides a specialized channel factory for working with paired files like this called channel.fromFilePairs(), which automatically groups files based on a shared naming pattern. That allows you to associate the paired files more tightly with less effort.
We're going to modify our workflow to take advantage of this. It's going to take two steps:
- Switch the channel factory to `channel.fromFilePairs()`
- Extract and map the metadata
5.1. Switch the channel factory to channel.fromFilePairs()¶
To use channel.fromFilePairs, we need to specify the pattern that Nextflow should use to identify the two members in a pair.
Going back to our example data, we can formalize the naming pattern as follows:
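Given the filenames above, the pair pattern is likely:

```
data/patientA_rep1_normal_R{1,2}_001.fastq.gz
```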
This is similar to the glob pattern we used earlier, except this specifically enumerates the substrings (either 1 or 2 coming right after the R) that identify the two members of the pair.
Let's update the workflow file_operations.nf accordingly:
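A sketch of the update:

```groovy
ch_files = channel.fromFilePairs('data/patientA_rep1_normal_R{1,2}_001.fastq.gz')

// map operation commented out for now; we'll add it back shortly
ch_files.view()
```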
We've switched the channel factory and adapted the file matching pattern, and while we were at it, we commented out the map operation. We'll add that back in later, with a few modifications.
Run the workflow to test it:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [chaotic_cuvier] DSL2 - revision: 472265a440
[patientA_rep1_normal_R, [/workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz, /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R2_001.fastq.gz]]
Compared to earlier, the structure of the output is a bit different.
We see only one channel element, a tuple containing two items: the part of the simpleName shared by the two files, which serves as an identifier, and a list containing the two file objects, in the format `[id, [file1, file2]]`.
Great, Nextflow has done the hard work of finding the prefix shared by the two files and using it as an identifier. However, we still need to get the rest of the metadata.
5.2. Extract and organize metadata from file pairs¶
Our map operation from before won't work because it doesn't match the data structure, but we can modify it to work.
We already have access to the actual patient identifier in the string that fromFilePairs() used as an identifier, so we can use that to extract the metadata without getting the simpleName from the Path object like we did before.
Uncomment the map operation in the workflow and make the following edits:
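A sketch of the updated map operation:

```groovy
ch_files = channel.fromFilePairs('data/patientA_rep1_normal_R{1,2}_001.fastq.gz')
    .map { id, files ->
        def (patient, replicate, type) = id.tokenize('_')
        [
            [
                id: patient,
                replicate: replicate.replace('rep', ''),
                type: type
            ],
            files
        ]
    }

ch_files.view()
```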
This time the map starts from id, files instead of just myFile, and tokenize() is applied to id instead of to myFile.simpleName.
Notice also that we've dropped readNum from the tokenize() line; any substrings that we don't specifically name (starting from the left) will be silently dropped.
We can do this because the paired files are now tightly associated, so we no longer need readNum in the metadata map.
Let's run the workflow:
Output
N E X T F L O W ~ version 25.04.3
Launching `file_operations.nf` [prickly_stonebraker] DSL2 - revision: f62ab10a3f
[[id:patientA, replicate:1, type:normal], [/workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R1_001.fastq.gz, /workspaces/training/side-quests/working_with_files/data/patientA_rep1_normal_R2_001.fastq.gz]]
And there it is: we have the metadata map ([id:patientA, replicate:1, type:normal]) in the first position of the output tuple, followed by the tuple of paired files, as intended.
Of course, this will only pick up and process that specific pair of files.
If you want to experiment with processing multiple pairs, you can try adding wildcards into the input pattern and see what happens.
For example, try using data/patientA_rep1_*_R{1,2}_001.fastq.gz
Takeaway¶
- `channel.fromFilePairs()` automatically finds and pairs related files
- This simplifies handling paired-end reads in your pipeline
- Paired files are grouped as `[id, [file1, file2]]` tuples
- Metadata extraction can be done from the paired file ID rather than from individual files
6. Using file operations in processes¶
Now let's put all this together in a simple process to reinforce how to use file operations inside a Nextflow process.
6.1. Create the process¶
We'll keep it simple and make a process called ANALYZE_READS that takes in a tuple of metadata and a pair of input files and analyzes them.
We could imagine this is alignment, variant calling, or any other step that makes sense for this data type.
Add the following to the top of your file_operations.nf file:
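A plausible reconstruction, based on the stats file contents and results layout shown further down; the script body in particular is an assumption:

```groovy
process ANALYZE_READS {
    tag "${meta.id}"

    publishDir "results/${meta.id}", mode: 'copy'

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path("${meta.id}_stats.txt")

    script:
    """
    echo "Sample metadata: ${meta.id}" > ${meta.id}_stats.txt
    echo "Replicate: ${meta.replicate}" >> ${meta.id}_stats.txt
    echo "Type: ${meta.type}" >> ${meta.id}_stats.txt
    echo "Read 1: ${reads[0]}" >> ${meta.id}_stats.txt
    echo "Read 2: ${reads[1]}" >> ${meta.id}_stats.txt
    echo "Read 1 size: \$(( \$(gzip -dc ${reads[0]} | wc -l) / 4 )) reads" >> ${meta.id}_stats.txt
    echo "Read 2 size: \$(( \$(gzip -dc ${reads[1]} | wc -l) / 4 )) reads" >> ${meta.id}_stats.txt
    """
}
```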
Note
We are calling our metadata map meta by convention.
For a deeper dive into meta maps, see Working with metadata.
6.2. Call the process in the workflow¶
Now let's add a call to the process.
For readability, we create a new channel named ch_samples to hold the contents of the channel of files after mapping, and feed that to the ANALYZE_READS process.
Make the following edits to the workflow:
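A sketch of the resulting workflow block:

```groovy
workflow {
    ch_files = channel.fromFilePairs('data/patientA_rep1_normal_R{1,2}_001.fastq.gz')

    ch_samples = ch_files.map { id, files ->
        def (patient, replicate, type) = id.tokenize('_')
        [[id: patient, replicate: replicate.replace('rep', ''), type: type], files]
    }

    ANALYZE_READS(ch_samples)
}
```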
Note that we've deleted the view() statement.
Now let's see it in action! Run the workflow:
Output
This process is set up to publish its outputs to a results directory, so have a look in there.
Sample metadata: patientA
Replicate: 1
Type: normal
Read 1: patientA_rep1_normal_R1_001.fastq.gz
Read 2: patientA_rep1_normal_R2_001.fastq.gz
Read 1 size: 10 reads
Read 2 size: 10 reads
The process took our inputs and created a new file containing the patient metadata, as designed. Splendid!
6.3. Include many more patients¶
Of course, this is just processing a single pair of files for a single patient, which is not exactly the kind of high throughput you're hoping to get with Nextflow. You'll probably want to process a lot more data at a time.
Remember that channel.fromFilePairs() accepts a glob as input, which means it can accept any number of file pairs that match the pattern.
Therefore if we want to include all the patients, we can simply modify the input string to include more patients, as noted in passing earlier.
Let's pretend we want to be as greedy as possible. Make the following edits to the workflow:
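For example, with wildcards in every variable position:

```groovy
ch_files = channel.fromFilePairs('data/patient*_rep*_*_R{1,2}_001.fastq.gz')
```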
Run the pipeline again:
Output
The results directory should now contain results for all the available data.
results
├── patientA
│ └── patientA_stats.txt
├── patientB
│ └── patientB_stats.txt
└── patientC
└── patientC_stats.txt
Success! We have analyzed all the patients in one go! Right?
Maybe not. If you look more closely, we have a problem: we have two replicates for patientA, but only one output file! We are overwriting the output file each time.
6.4. Make the published files unique¶
Since we have access to the patient metadata, we can use it to make the published files unique by including differentiating metadata, either in the directory structure or in the filenames themselves.
Make the following change to the workflow:
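That is, update the publishDir directive in ANALYZE_READS to build the output path from the metadata (the ordering of the directory levels is inferred from the results tree below):

```groovy
publishDir "results/${meta.type}/${meta.id}/${meta.replicate}", mode: 'copy'
```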
Here we show the option of using additional directory levels to account for sample types and replicates, but you could experiment with doing it at the filename level as well.
Now run the pipeline one more time, but be sure to remove the results directory first to give yourself a clean workspace:
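Presumably:

```bash
rm -rf results
nextflow run file_operations.nf
```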
Output
Check the results directory now:
results/
├── normal
│ ├── patientA
│ │ ├── 1
│ │ │ └── patientA_stats.txt
│ │ └── 2
│ │ └── patientA_stats.txt
│ ├── patientB
│ │ └── 1
│ │ └── patientB_stats.txt
│ └── patientC
│ └── 1
│ └── patientC_stats.txt
└── tumor
├── patientA
│ ├── 1
│ │ └── patientA_stats.txt
│ └── 2
│ └── patientA_stats.txt
├── patientB
│ └── 1
│ └── patientB_stats.txt
└── patientC
└── 1
└── patientC_stats.txt
And there it is, all our metadata, neatly organized. That's success!
There's a lot more you can do once you have your metadata loaded into a map like this:
- Create organized output directories based on patient attributes
- Make decisions in processes based on patient properties
- Split, join, and recombine data based on metadata values
This pattern of keeping metadata explicit and attached to the data (rather than encoded in filenames) is a core best practice in Nextflow that enables building robust, maintainable analysis workflows. You can learn more about this in the Metadata Side Quest.
Takeaway¶
- The `publishDir` directive can organize outputs based on metadata values
- Metadata in tuples enables structured organization of results
- This approach creates maintainable workflows with clear data provenance
- Processes can take tuples of metadata and files as input
- The `tag` directive provides process identification in execution logs
- Workflow structure separates channel creation from process execution
Summary¶
In this side quest, you've learned how to work with files in Nextflow, from basic operations to more advanced techniques for handling collections of files.
Applying these techniques in your own work will enable you to build more efficient and maintainable workflows, especially when working with large numbers of files with complex naming conventions.
Key patterns¶
- Basic File Operations: We created Path objects with `file()` and accessed file attributes like name, extension, and parent directory, learning the difference between strings and Path objects.
- Using Remote Files: We learned how to transparently switch between local and remote files using URIs (local paths, FTP, HTTPS, Amazon S3, Azure Blob Storage, Google Cloud Storage), demonstrating Nextflow's ability to handle files from various sources without changing workflow logic.
- Loading files using the `fromPath()` channel factory: We created channels from file patterns with `channel.fromPath()` and viewed their file attributes, including object types.
- Extracting Patient Metadata from Filenames: We used `tokenize()` and `replace()` to extract and structure metadata from filenames, converting them to organized maps.
- Simplifying with `channel.fromFilePairs()`: We used `channel.fromFilePairs()` to automatically pair related files and extract metadata from paired file IDs.
- Using File Operations in Processes: We integrated file operations into Nextflow processes with proper input handling, using `publishDir` to organize outputs based on metadata. For example, associating a meta map with the process inputs:

```groovy
ch_files = channel.fromFilePairs('data/patientA_rep1_normal_R{1,2}_001.fastq.gz')

ch_samples = ch_files.map { id, files ->
    def (sample, replicate, type) = id.tokenize('_')
    [ [ id: sample, replicate: replicate.replace('rep', ''), type: type ], files ]
}

ANALYZE_READS(ch_samples)
```
Additional resources¶
What's next?¶
Return to the menu of Side Quests or click the button in the bottom right of the page to move on to the next topic in the list.