
fastp

A tool designed to provide fast all-in-one preprocessing for FastQ files. This tool is developed in C++ with multithreading support to achieve high performance.

Citation: Shifu Chen. 2023. Ultrafast one-pass FASTQ data preprocessing, quality control, and deduplication using fastp. iMeta 2: e107. https://doi.org/10.1002/imt2.107

features

  1. comprehensive quality profiling of data both before and after filtering (quality curves, base contents, KMER, Q20/Q30, GC ratio, duplication, adapter contents...)
  2. filter out bad reads (too low quality, too short, or too many N bases...)
  3. cut low-quality bases at the 5' and 3' ends of each read by evaluating the mean quality within a sliding window (like Trimmomatic but faster).
  4. trim all reads at the front and tail
  5. cut adapters. Adapter sequences can be automatically detected, which means you don't have to input the adapter sequences to trim them.
  6. correct mismatched base pairs in overlapped regions of paired-end reads, if one base has high quality while the other has very low quality
  7. trim polyG at 3' ends, which is commonly seen in NovaSeq/NextSeq data. Trim polyX at 3' ends to remove unwanted polyX tails (e.g. polyA tails for mRNA-Seq data)
  8. preprocess unique molecular identifier (UMI) enabled data, shifting the UMI to the sequence name.
  9. report results in JSON format for further interpretation.
  10. visualize quality control and filtering results on a single HTML page (like FastQC but faster and more informative).
  11. split the output into multiple files (0001.R1.gz, 0002.R1.gz...) to support parallel processing. Two modes are available: limiting the total number of split files, or limiting the lines of each split file.
  12. support long reads (data from PacBio / Nanopore devices).
  13. support reading from STDIN and writing to STDOUT
  14. support interleaved input
  15. support ultra-fast FASTQ-level deduplication
  16. ...

If you find a bug or have a feature request for fastp, please file an issue: https://github.com/OpenGene/fastp/issues/new

simple usage

  • for single end data (not compressed)
fastp -i in.fq -o out.fq
  • for paired end data (gzip compressed)
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz

By default, the HTML report is saved to fastp.html (customizable with the -h option), and the JSON report is saved to fastp.json (customizable with the -j option).
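
For example, to write the reports to custom file names (the names here are placeholders, not required values):

# single-end run with custom report names and title
fastp -i in.fq -o out.fq -j myreport.json -h myreport.html -R "my report title"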

examples of report

fastp creates reports in both HTML and JSON format.

get fastp

install with Bioconda

# note: the fastp version in bioconda may not be the latest
conda install -c bioconda fastp

or download the latest prebuilt binary for Linux users

This binary was compiled on CentOS, and tested on CentOS/Ubuntu

# download the latest build
wget http://opengene.org/fastp/fastp
chmod a+x ./fastp

# or download a specific version, e.g. fastp v0.23.4
wget http://opengene.org/fastp/fastp.0.23.4
mv fastp.0.23.4 fastp
chmod a+x ./fastp

or compile from source

fastp depends on libdeflate and libisal. Note that libisal is not compatible with gcc 4.8; if you use gcc 4.8, your fastp will fail to run. Please upgrade your gcc before you build the libraries and fastp.

Step 1: download and build libisal

See https://github.com/intel/isa-l. autoconf, automake, libtool, nasm (>=2.11.01) and yasm (>=1.2.0) are required to build isa-l.

git clone https://github.com/intel/isa-l.git
cd isa-l
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make
sudo make install

Step 2: download and build libdeflate

See https://github.com/ebiggers/libdeflate

git clone https://github.com/ebiggers/libdeflate.git
cd libdeflate
cmake -B build
cmake --build build
cmake --install build

Step 3: download and build fastp

# get source (you can also use browser to download from master or releases)
git clone https://github.com/OpenGene/fastp.git

# build
cd fastp
make

# install
sudo make install

You can add the -j8 option to make/cmake to use 8 threads for compilation.

input and output

fastp supports both single-end (SE) and paired-end (PE) input/output.

  • for SE data, you only have to specify read1 input by -i or --in1, and specify read1 output by -o or --out1.
  • for PE data, you should also specify read2 input by -I or --in2, and specify read2 output by -O or --out2.
  • if you don't specify the output file names, no output files will be written, but QC will still be performed for the data both before and after filtering.
  • the output will be gzip-compressed if its file name ends with .gz
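
For example (file names are placeholders):

# QC only: no -o/-O given, so nothing is written but the reports are still generated
fastp -i in.R1.fq.gz -I in.R2.fq.gz

# output is gzip-compressed because the output names end with .gz
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz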

output to STDOUT

fastp supports streaming the reads that pass the filters to STDOUT, so that they can be piped to compressors like bzip2, or to aligners like bwa and bowtie2.

  • specify --stdout to enable this mode to stream output to STDOUT
  • for PE data, the output will be interleaved FASTQ, which means the output will contain records like record1-R1 -> record1-R2 -> record2-R1 -> record2-R2 -> record3-R1 -> record3-R2 ...
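
As a sketch, the interleaved output can be piped straight into an aligner; here bwa mem reads interleaved pairs with -p and "-" for STDIN (ref.fa is a placeholder index, and the 2> redirect just saves fastp's STDERR messages to a file):

# stream filtered, interleaved PE reads directly into bwa
fastp -i in.R1.fq.gz -I in.R2.fq.gz --stdout 2> fastp.log | bwa mem -p ref.fa - > out.sam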

input from STDIN

  • specify --stdin if you want to read from STDIN for processing.
  • if the STDIN is an interleaved paired-end stream, specify --interleaved_in to indicate that.
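
For example, a sketch for de-interleaving a paired-end stream (interleaved.fq is a placeholder file name):

# read an interleaved PE stream from STDIN and write read1/read2 to separate files
cat interleaved.fq | fastp --stdin --interleaved_in -o out.R1.fq.gz -O out.R2.fq.gz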

store the unpaired reads for PE data

  • you can specify --unpaired1 to store reads whose read1 passes the filters but whose paired read2 doesn't, and --unpaired2 for unpaired read2 reads.
  • --unpaired1 and --unpaired2 can be the same file, in which case the unpaired read1/read2 reads will all be written to that single file.
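
For example (file names are placeholders); see the single-file behavior described above:

# keep the surviving mate of broken pairs, all in one file
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz --unpaired1 unpaired.fq.gz --unpaired2 unpaired.fq.gz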

store the reads that fail the filters

  • give --failed_out to specify the file name to store the failed reads.
  • if a read fails and is written to --failed_out, its failure reason will be appended to its read name, for example failed_quality_filter or failed_too_short.
  • for PE data, if the unpaired reads are not stored (i.e. --unpaired1/--unpaired2 are not given), the failed pair of reads will be written together. If one read passes the filters but its pair doesn't, the failure reason will be paired_read_is_failing.
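
For example (failed.fq.gz is a placeholder name):

# store reads that fail the filters, with the failure reason tagged in the read name
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz --failed_out failed.fq.gz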

process only part of the data

If you don't want to process all the data, you can specify --reads_to_process to limit the number of reads to be processed. This is useful if you want a fast preview of the data quality, or if you want to create a subset of the filtered data.
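
For example, a quick QC-only preview (the read count is arbitrary):

# profile only the first 100000 reads; no output files, just reports
fastp -i in.fq.gz --reads_to_process 100000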

do not overwrite existing files

You can enable the option --dont_overwrite to prevent existing files from being overwritten by fastp. In this case, fastp will report an error and quit if it finds that any of the output files (read1, read2, JSON report, HTML report) already exists.

split the output to multiple files for parallel processing

See output splitting

merge PE reads

See merge paired-end reads

filtering

Multiple filters have been implemented.

quality filter

Quality filtering is enabled by default, but you can disable it with -Q or --disable_quality_filtering. Currently it supports filtering by limiting the number of N bases (-n, --n_base_limit) and by the percentage of unqualified bases.

To filter reads by their percentage of unqualified bases, two options are provided:

  • -q, --qualified_quality_phred       the quality value at or above which a base is qualified. Default 15 means phred quality >=Q15 is qualified.
  • -u, --unqualified_percent_limit   what percentage of bases is allowed to be unqualified (0~100). Default 40 means 40%.

You can also filter reads by their average quality score:

  • -e, --average_qual if one read's average quality score <avg_qual, then this read/pair is discarded. Default 0 means no requirement (int [=0])
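
For example, a stricter run combining the options above (the thresholds are illustrative, not recommendations):

# require Q20 bases, allow at most 30% unqualified bases and 3 N bases,
# and discard reads with average quality below Q20
fastp -i in.fq.gz -o out.fq.gz -q 20 -u 30 -n 3 -e 20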

length filter

Length filtering is enabled by default, but you can disable it by -L or --disable_length_filtering. The minimum length requirement is specified with -l or --length_required.

For some applications, like small RNA sequencing, you may want to discard long reads. You can specify --length_limit to discard reads longer than length_limit. The default value 0 means no limitation.
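
For example, keeping only reads between 18 and 35 bp for small RNA data (the bounds are illustrative):

# discard reads shorter than 18 bp or longer than 35 bp
fastp -i smallrna.fq.gz -o out.fq.gz -l 18 --length_limit 35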

low complexity filter

The low complexity filter is disabled by default, and you can enable it with -y or --low_complexity_filter. The complexity is defined as the percentage of bases that are different from the next base (base[i] != base[i+1]). For example:

# a 51-bp sequence with 3 positions where a base differs from the next base
seq = 'AAAATTTTTTTTTTTTTTTTTTTTTGGGGGGGGGGGGGGGGGGGGGGCCCC'
complexity = 3/(51-1) = 6%

The threshold for the low complexity filter can be specified with -Y or --complexity_threshold. Its range is 0~100, and its default value is 30, which means 30% complexity is required.
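
For example (the threshold is chosen for illustration only):

# filter out reads with complexity below 40%
fastp -i in.fq.gz -o out.fq.gz -y -Y 40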

Other filter

New filters are being implemented. If you have a new idea or new request, please file an issue.

adapters

Adapter trimming is enabled by default, but you can disable it by -A or --disable_adapter_trimming. Adapter sequences can be automatically detected for both PE/SE data.

  • For SE data, the adapter is evaluated by analyzing the tails of the first ~1M reads. This evaluation may be inaccurate, and you can specify the adapter sequence with the -a or --adapter_sequence option. If an adapter sequence is specified, auto-detection for SE data is disabled.
  • For PE data, adapters can be detected by per-read overlap analysis, which seeks the overlap of each pair of reads. This method is robust and fast, so normally you don't have to input the adapter sequence even if you know it. But you can still specify the adapter sequence for read1 with --adapter_sequence, and for read2 with --adapter_sequence_r2. If fastp fails to find an overlap (e.g. due to low-quality bases), it will use these sequences to trim adapters for read1 and read2 respectively.
  • For PE data, adapter sequence auto-detection is disabled by default, since the adapters can be trimmed by overlap analysis. However, you can specify --detect_adapter_for_pe to enable it.
  • For PE data, fastp will run a little slower if you specify adapter sequences or enable adapter auto-detection, but this usually results in slightly cleaner output, since the overlap analysis may fail due to sequencing errors or adapter dimers.
  • The most widely used adapters are the Illumina TruSeq adapters. If your data comes from a TruSeq library, you can add --adapter_sequence=AGATCGGAAGAGCACACGTCTGAACTCCAGTCA --adapter_sequence_r2=AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT to your command line, or enable auto-detection for PE data by specifying --detect_adapter_for_pe.
  • fastp contains some built-in known adapter sequences for better auto-detection. If you want some adapters to be added to the built-in set, please file an issue.

You can also specify --adapter_fasta to provide a FASTA file, and fastp will trim all the adapter sequences in that file. Here is a sample adapter FASTA file:

>Illumina TruSeq Adapter Read 1
AGATCGGAAGAGCACACGTCTGAACTCCAGTCA
>Illumina TruSeq Adapter Read 2
AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
>polyA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

The adapter sequences in this file should be at least 6 bp long, otherwise they will be skipped. You can include any sequence you want trimmed, not just regular sequencing adapters (e.g. polyA).

fastp first trims the auto-detected adapter or the adapter sequences given by --adapter_sequence | --adapter_sequence_r2, then trims the adapters given by --adapter_fasta one by one.
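
For example (assuming the FASTA above was saved as adapters.fa, a placeholder name):

# trim every sequence listed in adapters.fa, after the regular adapter trimming
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz --adapter_fasta adapters.fa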

The sequence distribution of trimmed adapters can be found in the HTML/JSON reports.

per read cutting by quality score

fastp supports per-read sliding window cutting by evaluating the mean quality scores within the window. Since v0.19.6, fastp supports 3 different operations, and you can enable any or all of them:

  • -5, --cut_front move a sliding window from front (5') to tail, drop the bases in the window if its mean quality is below cut_mean_quality, stop otherwise. Disabled by default. The leading N bases are also trimmed. Use cut_front_window_size to set the window size, and cut_front_mean_quality to set the mean quality threshold. If the window size is 1, this is similar to the Trimmomatic LEADING method.
  • -3, --cut_tail move a sliding window from tail (3') to front, drop the bases in the window if its mean quality is below cut_mean_quality, stop otherwise. Disabled by default. The trailing N bases are also trimmed. Use cut_tail_window_size to set the window size, and cut_tail_mean_quality to set the mean quality threshold. If the window size is 1, this is similar to the Trimmomatic TRAILING method.
  • -r, --cut_right move a sliding window from front to tail; if a window with mean quality below the threshold is found, drop the bases in the window and everything to the right of it, then stop. Use cut_right_window_size to set the window size, and cut_right_mean_quality to set the mean quality threshold. This is similar to the Trimmomatic SLIDINGWINDOW method.

WARNING: all three of these operations will interfere with deduplication for SE data, and --cut_front or --cut_right may also interfere with deduplication for PE data, since the deduplication algorithms rely on exact matching of the corresponding regions of the grouped reads/pairs.

If --cut_right is enabled, there is no need to enable --cut_tail, since the former is more aggressive. If --cut_right is enabled together with --cut_front, --cut_front will be performed first to avoid dropping whole reads due to low-quality starting bases.

If you don't set the window size and mean quality threshold for these functions individually, fastp will use the values from -W, --cut_window_size and -M, --cut_mean_quality.
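
For example (the thresholds are illustrative):

# Trimmomatic-SLIDINGWINDOW-like cutting with a 4-bp window at Q20
fastp -i in.fq.gz -o out.fq.gz --cut_right -W 4 -M 20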

base correction for PE data

fastp performs overlap analysis for PE data, which tries to find an overlap for each pair of reads. If a proper overlap is found, fastp can correct mismatched base pairs in the overlapped region when one base has high quality while its counterpart has very low quality. If a base is corrected, the quality of its paired base will be assigned to it, so that they share the same quality.

This function is not enabled by default; specify -c or --correction to enable it. It is based on overlap detection, which has the adjustable parameters overlap_len_require (default 30), overlap_diff_limit (default 5) and overlap_diff_percent_limit (default 20%). Please note that a pair of reads must meet all three conditions simultaneously.
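
For example (file names are placeholders):

# enable base correction in overlapped regions of PE reads
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -c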

global trimming

fastp supports global trimming, which means trimming all reads at the front or the tail. This is useful when you want to drop some cycles of a sequencing run.

For example, the last cycle of an Illumina run usually has low quality, and it can be dropped with the -t 1 or --trim_tail1=1 option.

  • For read1 or SE data, the front/tail trimming settings are given with -f, --trim_front1 and -t, --trim_tail1.
  • For read2 of PE data, the front/tail trimming settings are given with -F, --trim_front2 and -T, --trim_tail2. If these options are not specified, they will be the same as the read1 options, i.e. trim_front2 = trim_front1 and trim_tail2 = trim_tail1.
  • If you want to trim reads to a maximum length, you can specify -b, --max_len1 for read1 and -B, --max_len2 for read2. If --max_len1 is specified but --max_len2 is not, --max_len2 will be the same as --max_len1. For example, if --max_len1 is specified and read1 is longer than --max_len1, fastp will trim read1 at its tail to make it as long as --max_len1.
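
For example, a sketch combining the options above (the cycle counts are illustrative, not recommendations):

# drop the first 5 bases and the last base of every read, and cap reads at 100 bp
fastp -i in.fq.gz -o out.fq.gz -f 5 -t 1 -b 100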

Please note that trimming to the --max_len limitation is applied as the last step. The following are fastp's processing steps, in order, that may affect read lengths:

1. UMI preprocessing (--umi)
2. global trimming at front (--trim_front)
3. global trimming at tail (--trim_tail)
4. quality pruning at 5' (--cut_front)
5. quality pruning by sliding window (--cut_right)
6. quality pruning at 3' (--cut_tail)
7. trim polyG (--trim_poly_g, enabled by default for NovaSeq/NextSeq data)
8. trim adapter by overlap analysis (enabled by default for PE data)
9. trim adapter by adapter sequence (--adapter_sequence, --adapter_sequence_r2. For PE data, this step is skipped if the previous step succeeded)
10. trim polyX (--trim_poly_x)
11. trim to max length (--max_len)

polyG tail trimming

For Illumina NextSeq/NovaSeq data, polyG can appear in read tails, since G means no signal in the Illumina two-color systems. fastp can detect polyG in read tails and trim it. This feature is enabled by default for NextSeq/NovaSeq data, and you can specify -g or --trim_poly_g to enable it for any data, or -G or --disable_trim_poly_g to disable it. NextSeq/NovaSeq data is detected from the machine ID in the FASTQ records.

A minimum length for fastp to detect polyG can be set with --poly_g_min_len; it is 10 by default.

polyX tail trimming

This feature is similar to polyG tail trimming, but is disabled by default. Use -x or --trim_poly_x to enable it. A minimum length for fastp to detect polyX can be set with --poly_x_min_len; it is 10 by default.

When polyG tail trimming and polyX tail trimming are both enabled, fastp performs polyG trimming first, then polyX trimming. This is useful for trimming tails that have polyX (e.g. polyA) before the polyG. polyG is usually caused by sequencing artifacts, while polyA is commonly found in the tails of mRNA-Seq reads.
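
For example:

# force polyG trimming (regardless of sequencer) and also trim polyX tails
fastp -i in.fq.gz -o out.fq.gz -g -x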

unique molecular identifier (UMI) processing

UMIs are useful for duplicate elimination and error correction based on generating a consensus of reads originating from the same DNA fragment. They are usually used in deep sequencing applications like ctDNA sequencing. For Illumina platforms, UMIs are commonly integrated in one of two places: the index or the head of a read. To enable UMI processing, enable the -U or --umi option on the command line, and specify the UMI location with --umi_loc, which can be one of:

  • index1 the first index is used as UMI. If the data is PE, this UMI will be used for both read1/read2.
  • index2 the second index is used as UMI. PE data only, this UMI will be used for both read1/read2.
  • read1 the head of read1 is used as UMI. If the data is PE, this UMI will be used for both read1/read2.
  • read2 the head of read2 is used as UMI. PE data only, this UMI will be used for both read1/read2.
  • per_index index1_index2 is used as UMI for both read1/read2.
  • per_read define umi1 as the head of read1, and umi2 as the head of read2. umi1_umi2 is used as UMI for both read1/read2.

If --umi_loc is specified as read1, read2 or per_read, the length of the UMI should be specified with --umi_len.

fastp will extract the UMIs and append them to the first part of the read names, so the UMIs will also be present in SAM/BAM records. If the UMI is in the reads, it will be shifted out of the read, making the read shorter. If the UMI is in the index, the read is left unchanged.

A prefix can be specified with --umi_prefix. If a prefix is specified, an underscore will be used to connect it and the UMI. For example, with UMI=AATTCCGG and prefix=UMI, the final string presented in the name will be UMI_AATTCCGG.

If the UMI location is read1/read2/per_read, fastp can skip some bases after the UMI to trim the UMI separator and A/T tailing. Specify --umi_skip to set the number of bases to skip. It is 0 (disabled) by default.

UMI example

The original read:

@NS500713:64:HFKJJBGXY:1:11101:1675:1101 1:N:0:TATAGCCT+GACCCCCA
AAAAAAAAGCTACTTGGAGTACCAATAATAAAGTGAGCCCACCTTCCTGGTACCCAGACATTTCAGGAGGTCGGGAAA
+
6AAAAAEEEEE/E/EA/E/AEA6EE//AEE66/AAE//EEE/E//E/AA/EEE/A/AEE/EEA//EEEEEEEE6EEAA

After it's processed with command: fastp -i R1.fq -o out.R1.fq -U --umi_loc=read1 --umi_len=8:  

@NS500713:64:HFKJJBGXY:1:11101:1675:1101:AAAAAAAA 1:N:0:TATAGCCT+GACCCCCA
GCTACTTGGAGTACCAATAATAAAGTGAGCCCACCTTCCTGGTACCCAGACATTTCAGGAGGTCGGGAAA
+
EEE/E/EA/E/AEA6EE//AEE66/AAE//EEE/E//E/AA/EEE/A/AEE/EEA//EEEEEEEE6EEAA

output splitting

For parallel processing of FASTQ files (e.g. alignment in parallel), fastp supports splitting the output into multiple files. Splitting can work in two different modes: limiting the file count or limiting the lines of each file. The two modes cannot be enabled together.

The split file names will have a sequential number prefix added to the original file name specified by --out1 or --out2, and the width of the prefix is controlled by the -d or --split_prefix_digits option. For example, with --split_prefix_digits=4, --out1=out.fq and --split=3, the output files will be 0001.out.fq, 0002.out.fq and 0003.out.fq.

splitting by limiting file number

Use -s or --split to specify how many files you want. fastp estimates the read count of a FASTQ file by reading its first ~1M reads. This estimate is not exact, so the sizes of the last several files can differ a little (a bit bigger or smaller). For best performance, it is suggested to make the file number a multiple of the thread number.
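
For example:

# split the output into 16 files: 0001.out.fq.gz ... 0016.out.fq.gz
fastp -i in.fq.gz -o out.fq.gz -s 16 -d 4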

splitting by limiting the lines of each file

Use -S or --split_by_lines to limit the lines of each file. The last files may be smaller, since the input usually cannot be divided evenly. The actual line count of a file may be slightly greater than the value given by --split_by_lines, since fastp reads and writes data in blocks (one block = 1000 reads).

overrepresented sequence analysis

Overrepresented sequence analysis is disabled by default; specify -p or --overrepresentation_analysis to enable it. For speed and memory considerations, fastp only counts sequences with lengths of 10 bp, 20 bp, 40 bp, 100 bp or (cycles - 2).

By default, fastp samples 1 in 20 reads for sequence counting, and you can change this setting with the -P or --overrepresentation_sampling option. For example, with -P 100 only 1/100 of reads will be used for counting, and with -P 1 all reads will be used, which is extremely slow. The default value of 20 balances speed and accuracy.
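
For example (the sampling rate is illustrative):

# enable overrepresentation analysis, sampling 1 in 50 reads
fastp -i in.fq.gz -o out.fq.gz -p -P 50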

fastp not only gives the counts of overrepresented sequences, but also shows how they are distributed over sequencing cycles. A figure is provided for each detected overrepresented sequence, showing where the sequence is mostly found.

merge paired-end reads

For paired-end (PE) input, fastp supports stitching the reads together when the -m/--merge option is specified. In this merging mode:

  • --merged_out should be given to specify the file to store merged reads; otherwise, enable --stdout to stream the merged reads to STDOUT. The merged reads are also filtered.
  • --out1 and --out2 will be the reads that cannot be merged successfully, but both pass all the filters.
  • --unpaired1 will be the reads that cannot be merged, where read1 passes the filters but read2 doesn't.
  • --unpaired2 will be the reads that cannot be merged, where read2 passes the filters but read1 doesn't.
  • --include_unmerged can be enabled to redirect the reads of --out1, --out2, --unpaired1 and --unpaired2 to --merged_out, so you get a single output file. This option is disabled by default.

--failed_out can still be given to store the reads (merged or unmerged) that fail the filters.
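
For example (file names are placeholders):

# merge overlapping pairs; unmerged reads that pass the filters go to out.R1/out.R2
fastp -i in.R1.fq.gz -I in.R2.fq.gz -m --merged_out merged.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz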

In the output file, a tag like merged_xxx_yyy will be added to each read name to indicate how many base pairs came from read1 and read2, respectively. For example, @NB551106:9:H5Y5GBGX2:1:22306:18653:13119 1:N:0:GATCAG merged_150_15 means that 150 bp are from read1 and 15 bp are from read2. fastp prefers the bases in read1, since they usually have higher quality than those in read2.

Like the base correction feature, this function is based on overlap detection, which has the adjustable parameters overlap_len_require (default 30), overlap_diff_limit (default 5) and overlap_diff_percent_limit (default 20%). Please note that a pair of reads must meet all three conditions simultaneously.

duplication rate and deduplication

For both SE and PE data, fastp supports evaluating the duplication rate and removing duplicated reads/pairs. fastp considers a read duplicated only if all of its base pairs are identical to another read's. This means that if there is a sequencing error or an N base, the read will not be treated as duplicated.

duplication rate evaluation

By default, fastp evaluates the duplication rate, and this module may use 1G of memory and add 10% ~ 20% to the running time. If you don't need the duplication rate information, you can set --dont_eval_duplication to disable the evaluation. But please note that if the deduplication (--dedup) option is enabled, the --dont_eval_duplication option is ignored.

fastp uses a hash algorithm to find identical sequences. Due to possible hash collisions, about 0.01% of the total reads may be wrongly recognized as duplicated reads. Normally this does not impact downstream analysis. The accuracy of the duplication calculation can be improved by increasing the number of hash buffers or enlarging the buffer size. The option --dup_calc_accuracy can be used to specify the level (1 ~ 6); a higher level means more memory usage and a longer running time. Please refer to the following table:

dup_calc_accuracy level | hash buffer number | buffer size | memory usage | speed      | note
1                       | 1                  | 1G          | 1G           | ultra-fast | default for no-dedup mode
2                       | 1                  | 2G          | 2G           | fast       |
3                       | 2                  | 2G          | 4G           | fast       | default for dedup mode
4                       | 2                  | 4G          | 8G           | fast       |
5                       | 2                  | 8G          | 12G          | fast       |
6                       | 3                  | 8G          | 24G          | moderate   |

deduplication

Since v0.22.0, fastp supports deduplication for FASTQ data. Specify -D or --dedup to enable it. When --dedup is enabled, the dup_calc_accuracy level defaults to 3, and it can be changed to any value from 1 to 6.
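
For example (the accuracy level here is illustrative):

# drop duplicated pairs, with a higher-accuracy duplication calculation
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -D --dup_calc_accuracy 5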

all options

usage: fastp -i <in1> -o <out1> [-I <in1> -O <out2>] [options...]
options:
  # I/O options
  -i, --in1                          read1 input file name (string)
  -o, --out1                         read1 output file name (string [=])
  -I, --in2                          read2 input file name (string [=])
  -O, --out2                           read2 output file name (string [=])
      --unpaired1                      for PE input, if read1 passed QC but read2 not, it will be written to unpaired1. Default is to discard it. (string [=])
      --unpaired2                      for PE input, if read2 passed QC but read1 not, it will be written to unpaired2. If --unpaired2 is same as --unpaired1 (default mode), both unpaired reads will be written to this same file. (string [=])
      --failed_out                     specify the file to store reads that cannot pass the filters. (string [=])
      --overlapped_out                 for each read pair, output the overlapped region if it has no mismatched bases. (string [=])
  -m, --merge                          for paired-end input, merge each pair of reads into a single read if they are overlapped. The merged reads will be written to the file given by --merged_out, the unmerged reads will be written to the files specified by --out1 and --out2. The merging mode is disabled by default.
      --merged_out                     in the merging mode, specify the file name to store merged output, or specify --stdout to stream the merged output (string [=])
      --include_unmerged               in the merging mode, write the unmerged or unpaired reads to the file specified by --merge. Disabled by default.
  -6, --phred64                      indicate the input is using phred64 scoring (it'll be converted to phred33, so the output will still be phred33)
  -z, --compression                  compression level for gzip output (1 ~ 9). 1 is fastest, 9 is smallest, default is 4. (int [=4])
      --stdin                          input from STDIN. If the STDIN is interleaved paired-end FASTQ, please also add --interleaved_in.
      --stdout                         output passing-filters reads to STDOUT. This option will result in interleaved FASTQ output for paired-end input. Disabled by default.
      --interleaved_in                 indicate that <in1> is an interleaved FASTQ which contains both read1 and read2. Disabled by default.
      --reads_to_process             specify how many reads/pairs to be processed. Default 0 means process all reads. (int [=0])
      --dont_overwrite               don't overwrite existing files. Overwriting is allowed by default.
      --fix_mgi_id                     the MGI FASTQ ID format is not compatible with many BAM operation tools, enable this option to fix it.

  # adapter trimming options
  -A, --disable_adapter_trimming     adapter trimming is enabled by default. If this option is specified, adapter trimming is disabled
  -a, --adapter_sequence               the adapter for read1. For SE data, if not specified, the adapter will be auto-detected. For PE data, this is used if R1/R2 are found not overlapped. (string [=auto])
      --adapter_sequence_r2            the adapter for read2 (PE data only). This is used if R1/R2 are found not overlapped. If not specified, it will be the same as <adapter_sequence> (string [=])
      --adapter_fasta                  specify a FASTA file to trim both read1 and read2 (if PE) by all the sequences in this FASTA file (string [=])
      --detect_adapter_for_pe          by default, the adapter sequence auto-detection is enabled for SE data only, turn on this option to enable it for PE data.

  # global trimming options
  -f, --trim_front1                    trimming how many bases in front for read1, default is 0 (int [=0])
  -t, --trim_tail1                     trimming how many bases in tail for read1, default is 0 (int [=0])
  -b, --max_len1                       if read1 is longer than max_len1, then trim read1 at its tail to make it as long as max_len1. Default 0 means no limitation (int [=0])
  -F, --trim_front2                    trimming how many bases in front for read2. If it's not specified, it will follow read1's settings (int [=0])
  -T, --trim_tail2                     trimming how many bases in tail for read2. If it's not specified, it will follow read1's settings (int [=0])
  -B, --max_len2                       if read2 is longer than max_len2, then trim read2 at its tail to make it as long as max_len2. Default 0 means no limitation. If it's not specified, it will follow read1's settings (int [=0])

  # duplication evaluation and deduplication
  -D, --dedup                          enable deduplication to drop the duplicated reads/pairs
      --dup_calc_accuracy              accuracy level to calculate duplication (1~6), higher level uses more memory (1G, 2G, 4G, 8G, 16G, 24G). Default 1 for no-dedup mode, and 3 for dedup mode. (int [=0])
      --dont_eval_duplication          don't evaluate duplication rate to save time and use less memory.

  # polyG tail trimming, useful for NextSeq/NovaSeq data
  -g, --trim_poly_g                  force polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data
      --poly_g_min_len                 the minimum length to detect polyG in the read tail. 10 by default. (int [=10])
  -G, --disable_trim_poly_g          disable polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data

  # polyX tail trimming
  -x, --trim_poly_x                    enable polyX trimming in 3' ends.
      --poly_x_min_len                 the minimum length to detect polyX in the read tail. 10 by default. (int [=10])

  # per read cutting by quality options
  -5, --cut_front                      move a sliding window from front (5') to tail, drop the bases in the window if its mean quality < threshold, stop otherwise.
  -3, --cut_tail                       move a sliding window from tail (3') to front, drop the bases in the window if its mean quality < threshold, stop otherwise.
  -r, --cut_right                      move a sliding window from front to tail, if meet one window with mean quality < threshold, drop the bases in the window and the right part, and then stop.
  -W, --cut_window_size                the window size option shared by cut_front, cut_tail or cut_sliding. Range: 1~1000, default: 4 (int [=4])
  -M, --cut_mean_quality               the mean quality requirement option shared by cut_front, cut_tail or cut_sliding. Range: 1~36 default: 20 (Q20) (int [=20])
      --cut_front_window_size          the window size option of cut_front, default to cut_window_size if not specified (int [=4])
      --cut_front_mean_quality         the mean quality requirement option for cut_front, default to cut_mean_quality if not specified (int [=20])
      --cut_tail_window_size           the window size option of cut_tail, default to cut_window_size if not specified (int [=4])
      --cut_tail_mean_quality          the mean quality requirement option for cut_tail, default to cut_mean_quality if not specified (int [=20])
      --cut_right_window_size          the window size option of cut_right, default to cut_window_size if not specified (int [=4])
      --cut_right_mean_quality         the mean quality requirement option for cut_right, default to cut_mean_quality if not specified (int [=20])

  # quality filtering options
  -Q, --disable_quality_filtering    quality filtering is enabled by default. If this option is specified, quality filtering is disabled
  -q, --qualified_quality_phred      the quality value that a base is qualified. Default 15 means phred quality >=Q15 is qualified. (int [=15])
  -u, --unqualified_percent_limit    how many percents of bases are allowed to be unqualified (0~100). Default 40 means 40% (int [=40])
  -n, --n_base_limit                 if one read's number of N base is >n_base_limit, then this read/pair is discarded. Default is 5 (int [=5])
  -e, --average_qual                 if one read's average quality score <avg_qual, then this read/pair is discarded. Default 0 means no requirement (int [=0])


  # length filtering options
  -L, --disable_length_filtering     length filtering is enabled by default. If this option is specified, length filtering is disabled
  -l, --length_required              reads shorter than length_required will be discarded, default is 15. (int [=15])
      --length_limit                 reads longer than length_limit will be discarded, default 0 means no limitation. (int [=0])

  # low complexity filtering
  -y, --low_complexity_filter          enable low complexity filter. The complexity is defined as the percentage of base that is different from its next base (base[i] != base[i+1]).
  -Y, --complexity_threshold           the threshold for low complexity filter (0~100). Default is 30, which means 30% complexity is required. (int [=30])

  # filter reads with unwanted indexes (to remove possible contamination)
      --filter_by_index1               specify a file contains a list of barcodes of index1 to be filtered out, one barcode per line (string [=])
      --filter_by_index2               specify a file contains a list of barcodes of index2 to be filtered out, one barcode per line (string [=])
      --filter_by_index_threshold      the allowed difference of index barcode for index filtering, default 0 means completely identical. (int [=0])

  # base correction by overlap analysis options
  -c, --correction                   enable base correction in overlapped regions (only for PE data), default is disabled
      --overlap_len_require            the minimum length to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 30 by default. (int [=30])
      --overlap_diff_limit             the maximum number of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 5 by default. (int [=5])
      --overlap_diff_percent_limit     the maximum percentage of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. Default 20 means 20%. (int [=20])

  # UMI processing
  -U, --umi                          enable unique molecular identifier (UMI) preprocessing
      --umi_loc                      specify the location of the UMI, can be one of index1/index2/read1/read2/per_index/per_read, default is none (string [=])
      --umi_len                      if the UMI is in read1/read2, its length should be provided (int [=0])
      --umi_prefix                   if specified, an underline will be used to connect prefix and UMI (i.e. prefix=UMI, UMI=AATTCG, final=UMI_AATTCG). No prefix by default (string [=])
      --umi_skip                       if the UMI is in read1/read2, fastp can skip several bases following UMI, default is 0 (int [=0])

  # overrepresented sequence analysis
  -p, --overrepresentation_analysis    enable overrepresented sequence analysis.
  -P, --overrepresentation_sampling    One in (--overrepresentation_sampling) reads will be computed for overrepresentation analysis (1~10000), smaller is slower, default is 20. (int [=20])

  # reporting options
  -j, --json                         the json format report file name (string [=fastp.json])
  -h, --html                         the html format report file name (string [=fastp.html])
  -R, --report_title                 should be quoted with ' or ", default is "fastp report" (string [=fastp report])

  # threading options
  -w, --thread                       worker thread number, default is 3 (int [=3])

  # output splitting options
  -s, --split                        split output by limiting total split file number with this option (2~999), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (int [=0])
  -S, --split_by_lines               split output by limiting lines of each file with this option(>=1000), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (long [=0])
  -d, --split_prefix_digits          the digits for the sequential number padding (1~10), default is 4, so the filename will be padded as 0001.xxx, 0 to disable padding (int [=4])

  # help
  -?, --help                         print this message

citations

Shifu Chen. 2023. Ultrafast one-pass FASTQ data preprocessing, quality control, and deduplication using fastp. iMeta 2: e107. https://doi.org/10.1002/imt2.107

Shifu Chen, Yanqing Zhou, Yaru Chen, Jia Gu; fastp: an ultra-fast all-in-one FASTQ preprocessor, Bioinformatics, Volume 34, Issue 17, 1 September 2018, Pages i884–i890, https://doi.org/10.1093/bioinformatics/bty560


fastp's Issues

Request: support for amplicon sequencing

Amplicon sequencing uses a set of artificial, amplicon-specific primers. If a read begins with such a primer, it is a target read, and the artificial primer should be removed. Otherwise, the read should be filtered out.

Order of execution?

Can I run different steps in one single call? E.g. quality, adapter, polyG and "global" trimming? If yes, what is the order of execution? Or do I have to run each distinct trimming step on its own?

requirement: splitting on the number of lines/reads

Hi,

I'm wondering if it is possible to add a new split option: is it possible to split files by a certain number of reads rather than into a certain number of sub-files?
It would be useful if you want to parallelize and standardize the downstream alignments (to predict the execution time of each sub-sample) and you don't know the size of your input fastq.gz file...

installation issue on CentOS6.9

Hi,

This looks like a great tool and I would like to give it a try, but I encountered an issue when attempting to install it. This issue might be due to our old server, which runs CentOS 6.9 with gcc 4.8.2. Are there other system parameters you want to know? Installation went fine on my own Ubuntu 17.10.

I downloaded release 0.5.0 (the same happens after cloning from GitHub) and executed make:

g++ -std=c++11 -g -I./inc -O3 -c  src/adaptertrimmer.cpp -o obj/adaptertrimmer.o
g++ -std=c++11 -g -I./inc -O3 -c  src/evaluator.cpp -o obj/evaluator.o
g++ -std=c++11 -g -I./inc -O3 -c  src/fastqreader.cpp -o obj/fastqreader.o
g++ -std=c++11 -g -I./inc -O3 -c  src/filter.cpp -o obj/filter.o
g++ -std=c++11 -g -I./inc -O3 -c  src/filterresult.cpp -o obj/filterresult.o
g++ -std=c++11 -g -I./inc -O3 -c  src/htmlreporter.cpp -o obj/htmlreporter.o
g++ -std=c++11 -g -I./inc -O3 -c  src/jsonreporter.cpp -o obj/jsonreporter.o
g++ -std=c++11 -g -I./inc -O3 -c  src/main.cpp -o obj/main.o
g++ -std=c++11 -g -I./inc -O3 -c  src/options.cpp -o obj/options.o
g++ -std=c++11 -g -I./inc -O3 -c  src/overlapanalysis.cpp -o obj/overlapanalysis.o
g++ -std=c++11 -g -I./inc -O3 -c  src/peprocessor.cpp -o obj/peprocessor.o
g++ -std=c++11 -g -I./inc -O3 -c  src/processor.cpp -o obj/processor.o
g++ -std=c++11 -g -I./inc -O3 -c  src/read.cpp -o obj/read.o
g++ -std=c++11 -g -I./inc -O3 -c  src/seprocessor.cpp -o obj/seprocessor.o
g++ -std=c++11 -g -I./inc -O3 -c  src/sequence.cpp -o obj/sequence.o
g++ -std=c++11 -g -I./inc -O3 -c  src/stats.cpp -o obj/stats.o
g++ -std=c++11 -g -I./inc -O3 -c  src/threadconfig.cpp -o obj/threadconfig.o
g++ -std=c++11 -g -I./inc -O3 -c  src/unittest.cpp -o obj/unittest.o
g++ -std=c++11 -g -I./inc -O3 -c  src/writer.cpp -o obj/writer.o
g++ ./obj/adaptertrimmer.o ./obj/evaluator.o ./obj/fastqreader.o ./obj/filter.o ./obj/filterresult.o ./obj/htmlreporter.o ./obj/jsonreporter.o ./obj/main.o ./obj/options.o ./obj/overlapanalysis.o ./obj/peprocessor.o ./obj/processor.o ./obj/read.o ./obj/seprocessor.o ./obj/sequence.o ./obj/stats.o ./obj/threadconfig.o ./obj/unittest.o ./obj/writer.o  -lz -lpthread -o fastp
./obj/peprocessor.o: In function `PairEndProcessor::initOutput()':
/home/wdecoster/bin/fastp-0.5.0/src/peprocessor.cpp:32: undefined reference to `gzbuffer'
/home/wdecoster/bin/fastp-0.5.0/src/peprocessor.cpp:35: undefined reference to `gzbuffer'
./obj/fastqreader.o: In function `FastqReader::getBytes(unsigned long&, unsigned long&)':
/home/wdecoster/bin/fastp-0.5.0/src/fastqreader.cpp:38: undefined reference to `gzoffset'
collect2: error: ld returned 1 exit status
make: *** [fastp] Error 1

Do you have suggestions on how to fix this?

Cheers,
Wouter

Error: Segmentation fault (core dumped)

Dear Developer,
I have PE FASTQ files: R1.fq + R2.fq contain 34225961 read pairs (10.2G total bases), and the R1.fq file is 12GB.
When I run the fastp command:
fastp -i R1.fq -I R2.fq -o trim.R1.fastq.gz -O trim.R2.fastq.gz -5 -3 -M 30 -q 30 -l 36 -n 5 -c --html trim.html --json trim.json --report_title "Fastp Report" --thread 10 > trim.log
I get the error "Segmentation fault (core dumped)".

But when I remove one of the '-5' or '-3' options, no error is reported.

So I extracted 100000 reads to build test.1.fq and test.2.fq, and ran fastp with the same command:
fastp -i test.1.fq -I test.2.fq -o trim.R1.fastq.gz -O trim.R2.fastq.gz -5 -3 -M 30 -q 30 -l 36 -n 5 -c --html trim.html --json trim.json --report_title "Fastp Report" --thread 10 > trim.log
It ran OK without error.

How can I solve this problem?

HTML report integration into MultiQC

One of the great things about FastQC is that MultiQC can be used to integrate all the quality control data into a single useful HTML.

Is this sort of integration available for Fastp or will it be implemented in the future?

installation issue

Hi,

I have a personal laptop and a work laptop. The personal one is a windows 10 hosting Ubuntu 16.04 on a Oracle virtual box (with anaconda and both python 2.7 and 3.6 installed) and the work laptop is a windows 7 hosting Ubuntu 16.04 on a VMware workstation (with anaconda and python 3.6 installed). I had no problem installing either AfterQC or fastp on my personal laptop.

However, when I tried to install fastp (thinking that I don't want to install python 2.7 for AfterQC) on my work laptop through bioconda, it told me that I have conflicts with other packages, as follows:

"Solving environment: failed

UnsatisfiableError: The following specifications were found to be in conflict:

  • blaze -> pytables[version='>=3.0.0'] -> zlib[version='>=1.2.11,<1.3.0a0']
  • fastp
    Use "conda info " to see the dependencies for each package."

When I tried to install directly by cloning from GitHub, it also failed, with the following error message:

" /usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
Makefile:17: recipe for target 'fastp' failed
make: *** [fastp] Error 1 "

I realize that there must be some insufficiency with my work laptop but I don't know what it is, as I am pretty new to the Linux system. I would really like to have it fixed, because fastp is a really nicely made package for cleaning up and QC'ing WGS reads. If you have any suggestions or useful tips for resolving this issue, please help me out.

Really appreciate your help. Thanks in advance.

Yun

How to remove adapters cleanly?

Hi sfchen,
Today I compared three software packages (fastp, cutadapt, trimmomatic), and found fastp very fast, but adapters were not removed cleanly.
I have uploaded my results; hopefully you find this useful!
compare_software.xlsx

sfchen commented:
Thanks for the result, from the data, I can see:

for short adapters (7bp), Trimmomatic removes the most adapters, fastp removes fewer, and Cutadapt removes the least.
for longer adapters (>7bp), fastp removes much more than Trimmomatic and Cutadapt.
in total, fastp removes the most adapters.

Am I right?

Reply:

Hi sfchen,
My sample's real adapter sequence is GATCGGAAGAGCACACGTCTGAACTCCAGTCAC********ATCTCGTATGCCGTCTTCTGCTTG, where '*' is an 8bp barcode.
And the trimmomatic adapter file is:
adapter.list.xlsx

After getting the clean FASTQ data, I split the adapter sequence into short substrings, like:
7bp (AGATCGG)
8bp (AGATCGGA)
9bp (AGATCGGAA)
10bp (AGATCGGAAG)
11bp (AGATCGGAAGA)
12bp (AGATCGGAAGAG)
13bp (AGATCGGAAGAGC)

and then counted the reads containing each adapter substring.

'out.R1.fq.gz' is not a writable file, quit now

When I use version 0.13.0 and run the following command

fastp -z 1 -i test_1.fq -I test_2.fq --out1 out.R1.fq.gz --out2 out.R2.fq.gz

I then hit the problem "'out.R1.fq.gz' is not a writable file, quit now".

After touching out.R1.fq.gz first, the problem was resolved.

Support UMI pre-processing with a 3rd file containing UMIs

Shifu;
Thanks for this great tool and for adding pre-processing for UMIs. I've been looking for faster options to replace our use of umis (https://github.com/vals/umis) for pre-processing UMI outputs and adding them into read headers. We typically end up with UMIs in a 3rd file output from bcl2fastq when the UMIs are present in the input reads, and I wondered if it is possible to support this?

I had a quick dig into the code to start implementing this, but realized you have specialized iterators for pairs, so I didn't want to break too much by trying to add a 3-input iterator, thinking there might be a better way to integrate it.

Here is an example case with R1/R3 as the first/second read pair and R2 as the UMI:

https://s3.amazonaws.com/chapmanb/testcases/fastp_umi_example.tar.gz

Thanks for any thoughts and suggestions for processing these with fastp.

Not an Issue but a function request

Hi,

It would be nice to have an option to add a report name at the top of the report, just before the summary. I use the -h option, and a specified name added to the top of the report would help; or a new option: -R report name included in the generated report (string [=reportname])

Thanks,

B.

Add verbose output option (-V)

When I run fastp there is no output until the end.

I would like to see what is happening.
Would it be possible to add some progress messages?
-V / --verbose

Thanks

Discrepancy in results between screen output and report output

Hi,

I'm trying to reconcile the on-screen results when fastp finishes running with the results summary in the generated report file.

The numbers don't add up...

Thanks for the help.

================
Example when fastp finishes and the output on screen gives:

Read1 before filtering:
total reads: 4000000
total bases: 400000000
Q20 bases: 392516035(98.129%)
Q30 bases: 376358571(94.0896%)

Read1 after filtering:
total reads: 3102308
total bases: 309754624
Q20 bases: 306341522(98.8981%)
Q30 bases: 294745910(95.1546%)

Read2 before filtering:
total reads: 4000000
total bases: 400000000
Q20 bases: 387859490(96.9649%)
Q30 bases: 373512577(93.3781%)

Read2 aftering filtering:
total reads: 3102308
total bases: 309754624
Q20 bases: 305723840(98.6987%)
Q30 bases: 295222362(95.3085%)

Filtering result:
reads passed filter: 6204616
reads failed due to low quality: 235470
reads failed due to too many N: 1559914
reads failed due to too short: 0
reads with adapter trimmed: 1355796
bases trimmed due to adapters: 17275112

=================================
These are the results in the report:

fastp report
Summary
General
fastp version: 0.7.0
sequencing: paired end (100 cycles + 100 cycles)

Before filtering
total reads: 7.629395 M
total bases: 762.939453 M
Q20 bases: 744.224095 M (97.546941%)
Q30 bases: 715.132854 M (93.733893%)

After filtering
total reads: 5.917183 M
total bases: 590.810059 M
Q20 bases: 583.711016 M (98.798422%)
Q30 bases: 562.637589 M (95.231552%)

Filtering result
reads passed filters: 5.917183 M (77.557700%)
reads with low quality: 229.951172 K (2.943375%)
reads with too many N: 1.487650 M (19.498925%)
reads too short: 0 (0.000000%)

bug (?): non-standard FASTQ file on paired read

Dear Developer,

I have some trouble with this application: I'm trying to filter my paired-read FASTQ files with fastp, using two different sets of filters.

Here is a sample of my 2 fastq files (gunzip -c file_X.fastq.gz | tail -n 50):
ffcf607a-7b70-4e16-a60b-c09197fa1601_1.fastq.gz
@HWI-1KL150:70:C74KBACXX:1:1101:1931:1994 1:N:0:ATGCCT
NTCTTTCTGACCCTCACTGAGAGCGACCTGAAGGAAATTGGCATCACGTGCGTCCAGAAGGGCCGCTCTGGCCCTCAGCCCGGGGTTGGGGCAAACTCCCA
+
#1=DFFFFGHFHHIJEIJJIEIJGIIIGIIJFHJICEICGGICIII@GFHIGCHGAHECHFBF=A@?BDDDDDDD8<CCCDD<@BB<>BBBB9?<<ACCD(
@HWI-1KL150:70:C74KBACXX:1:1101:3880:1976 1:N:0:ATGCCT
NACTTTCTGTTTTTCCTTTATAGCAAGCAACCCAGTGATAGCAGCCCAGCTCTGGTGAGTGTCCTTGAGCTCTAGAGCACAGCTCTCCTCTCTAAGNNNNN
+
#1=DFFFFHHHHHJJJJJJJJJJJJJJJIJJJJJJIIJJJIJJJJJJJJJJIJJJ@GGIDFFGHIIJEIJIJHGHAA>DFFFFECCCCCCDDDEDCCACC3
@HWI-1KL150:70:C74KBACXX:1:1101:4185:1976 1:N:0:ATGCCT
NACTCCGGGGCTGCTCTGGACCAGTTTCCATTCCCGTCTCCCCACCCTCACCATCCCTCAGGACATCACGAGTGGTTGCTTGGACCTGAGGTGGACATTCT
+
#1=DFFFFHHHHHJJJJJJGIJIJFHIGIIIJIJJJGIIIJJJJIJJIJJJHHHHHFFFEECEDEDDDC@?@<BBCDDDCDDBCDDCCCDD>BB<A(4:>(
@HWI-1KL150:70:C74KBACXX:1:1101:4438:1971 1:N:0:ATGCCT
NCCCAACCAATCAGCCCCAATTTACGATCTATGTAACTCACCAGTTCGATATGCCAATAACCTGGCCTGAACCATGCAGTGCCTTGCAATTTCCTGTGGCA
+
#1=DDFFFHHHHHJJJJJJJJJJJIJJJJJJJJIJDGIJJJIJJIJIJJIJIIIIIIIGGICHIGHHHGHCFFFE66;@ACCCC@CCCAAC@CDCCCC?B?
@HWI-1KL150:70:C74KBACXX:1:1101:4539:1970 1:N:0:ATGCCT
NGCAGGCCGCGGACGGAGAGCACGTGAGGGAAGGGGAAGCCGCTCCGGCCTGCGTAGGGGGGGGGGCGGGGCCCCCCGGGACACCCGGGAGGGGGGCGGGN
+
#4=DDDDDHHDHAGGIDG:;=<FF8C@DDE4?CDEC6>?=;983;?8:&57?85)+0<()5-&&)))0-)&)0&))&&))&&((())&)0&05&0&&&05<
@HWI-1KL150:70:C74KBACXX:1:1101:4702:1995 1:N:0:ATGCCT
NGGGGTCCTCTGCGGCCAGGGCAGCGCTGCTCAGCATGATGAAGACAAGGATGAGGTTGGTGAAGATGTGGTGGTTGATGAGCTTGTGGGAGCCTACGCGN
+
#4=DDDFFHHHHHJJJJIHHIJIIJJJJJJJJIIIIJJJJJJJJJJJJHICHHEHFCDFD;?AAC@CCACD(8?',5(4:4>ACC+++(&2?&8((+&&)5
@HWI-1KL150:70:C74KBACXX:1:1101:6121:1971 1:N:0:ATGCCT
NGTCCTCCCACCAGCCGGGCACTACTTACATGACGATGAGAGCAGCGTCTCGGGAGTAATCCAGCACAATCTCCTTCAGCCTCACCTGCCGAAGGGCCTGN
+
#1:BDFFFHHHHGIIIIGFBGIBEH@GECHCHGGGDFHIIIHICEGHGEHIHEH6?5;@3(>(.-(;(55>AC((,,55>?<<C??:9<A9&5)5<(2+(2
@HWI-1KL150:70:C74KBACXX:1:1101:6748:1978 1:N:0:ATGCCT
NATCCCGTTGGCTTTCCAGGAGGCTCTGCAGCATCTGCAGGGTCCTGGGGTCCTGGTAAGGGGCTTCCAGGAGTGGAGAAGGGGGGCAGTGAGGTTGGGCC
+
#1=DFFFDHHHHHJJJJJJJJJJJJIJIJJIFIIIJJJJIJJGIHIIIIJGGIJJJAHIIJHHHFFFEDCE;@?B;5<ABDBBDDB@BB3@ACD?BD>B?(
@HWI-1KL150:70:C74KBACXX:1:1101:6964:1994 1:N:0:ATGCCT
NGTGGCAATTCTCTTCAGTAGGTTGGCCAAGTCAGCAGACACGGTGCTGGTCTTATAGCTGTCAAATTCAGGAAGGGTCTTGGGCTTAAAATACTCAAACA
+
#1=DDFFDFHHHHJJJJJHIJJHJJJJJJJJIJJJJJJJJJJJJGHIJJJFGIIIJJJIJIHIJICGGCGGEFHH;B;CC@CDDDDDDCCCEDCDCD:CC5
@HWI-1KL150:70:C74KBACXX:1:1101:8404:1977 1:N:0:ATGCCT
NTTTTTCACTCCATTGTTGTTGTTTACCCAGTTTATGGGGGTTGTAATGTTTATCACACTCCTTGGATGATTTCCGAAGGTAAGATATCTGGAATGGTTTT
+
#4=DDFFFHHHHHGHHFHIFHIGIIIIJJJJBHGIIIIGGI?FGFEHJGGAHIJJJGIGGHHHHHFFFFDCC@CE;3;>@:@>C>ACA@CDCD<AC>ACA<
@HWI-1KL150:70:C74KBACXX:1:1101:8836:1977 1:N:0:ATGCCT
NGCTGTTTTACAAGTTGGTAGTTTTCTCTTCTTGGCATGGTGAACGTGCCCTAAAGGCCTGATGTCAGGCTCCATCCTCCATGTTAAAATAGTGAGTTCTT
+
#1=DFDFFHHHHHJJIJJCFHCFIJEHGIJIJJGHEGIIJFEGHDGEIIJHCGIJEGIJGEGHIGIIHIG<AECA7?DDFFFECC(>CC@CC>CCC>C@D:
@HWI-1KL150:70:C74KBACXX:1:1101:9989:1970 1:N:0:ATGCCT
NTTGTCAACTTTGCTTTTGCTCATGTTGTAATGTTTGGCAATATATGACACATCCACTTGTTTATCGAATCCCTGTCAAAAAGAAGAACAGCAAAAACATN
+
#1:B=DDDFFFHDGIIIIIGGIEGHBH@9<9FFFHGIIG>FGEGIGDDHG<:9?D@D8?>?FHGGGGGBAG)@;77;4?A;(9?@DFEEA>55=>=;'((,
@HWI-1KL150:70:C74KBACXX:1:1101:10460:1982 1:N:0:ATGCCT
NCCTATGCAACCTCAGTGTCCACTGAGAAGGGAATCTTGTGGTATGGAACAATGTGGCAAAAAGGTACAAAGTATTCTTACACCTGGAATTCTTAACCTGN

ffcf607a-7b70-4e16-a60b-c09197fa1601_2.fastq.gz
@HWI-1KL150:70:C74KBACXX:1:1101:1931:1994 2:N:0:ATGCCT
ACAGCCTGCGGGGGGAATGTGACCAGGATATGCCTCAGCGTCCCAAGAGCGCTTACATGAGTGGGAGTTTGCCCCAACCCCGGGCTGAGGGCCAGAGCGGC
+
@CCFFFFFHGHHGID9@BCDEDDDDDBBCCC@@C@CDDABDDDBDDDD?90:BDDCDECDD@A?BDD<CCDCDABB@BDDDDBB>9>BDDDDD<B?<CBD9
@HWI-1KL150:70:C74KBACXX:1:1101:3880:1976 2:N:0:ATGCCT
AAAGGGGAAAAAAATTACCAGATGACACACTTCCTGATTTCACTGTAGTAAGGAAAAAGTCAACATTGCAAATAAATACGATCCTTAGAGAGGAGAGCTGT
+
CCCFFFFFHHHHHJJJJJJJJIJIJJJJJJJJJJJJGIJJJJJJJGHGFHGFIIGJJJJFEHHHGFFFFDF@CCEEEEDDBDBDDCCCDDDDDBBB??CD+
@HWI-1KL150:70:C74KBACXX:1:1101:4185:1976 2:N:0:ATGCCT
ACACTAGCCACTCACGTTCCATCTCTTCCTCGGAGAAATCCTCAGGCCCAGCCAAGGGCAGGAGCAAAAAGGGGAGAATGTCCACCTCAGGTCCAAGCAAC
+
CCCFFFFFHHHHHJJJHIIJJJIJJJJJJIJJIJJJJJJIJJJJJJIIJJFHIJIJJJIIHHGFFFFDDDDDDD@BBDDDDDDDDDDDDDD@CDD>CBBDA
@HWI-1KL150:70:C74KBACXX:1:1101:4438:1971 2:N:0:ATGCCT
GTTTGGAGAACCTGTGTGAAAATCCATACTTTAGCAATCTAAGGCAAAACATGAAAGACCTTATCCTACTTTTGGCCACAGTAGCTTCCAGTGTGCCGAAC
+
CCCFFFFFHHHHHJJJHIJJJJJJJIJJJIJJIJIJJJJJIIIIJJIJIIDDEIGGHJHGGGIIIGGDGHIJJJHHHHFFFCDECCCEECDCCDCCCD??3
@HWI-1KL150:70:C74KBACXX:1:1101:4539:1970 2:N:0:ATGCCT
GGCCTCGTGCGCTCGGGCCCGCACGCCGTTGTTCGCGTCACCCCCACCCAGCTCCCTTCCGCGTGTGCTCGGAGGGCGCGGCGCACCGCCTACGCAGGCCN
+
CCCFFFFFHHHHHJJJJIJJJJJJJJJJIJHEHHFFDDBDDDDDDDDDDDDDDDDDDDDDDDDBDBDDDCDD;BBDDDDDDBD@BDDDD<<>CDDBBDDD>
@HWI-1KL150:70:C74KBACXX:1:1101:4702:1995 2:N:0:ATGCCT
CAAGCAGCGGCTTTTCCCTGCAGGATCCGCGTAGGCTGCCACAAGCTCATCAACCACCACATCTTCACCAACCTCATCCTTGTCTTCATCATGCTGAGCAN
+
CCCFFFFFHHHHHJJJIIIIJIJJJIJJIJJEHEIIIIDGHIGIAEHHHH?@DEFFDDDDDCDCDEDDD>B@BCDCCDDACDCDDDDEEE@CDDCCCBDD3
@HWI-1KL150:70:C74KBACXX:1:1101:6121:1971 2:N:0:ATGCCT
AGACAGGAGACTCTATAAGAATTTATGAGGCAGCAGAGTCTACAAGTAAATCATGAATCCAGTTGAAAATGTTAATGAGGCCATAGACGTGGTGAAGGATT
+
@C@DFFFFHHHGHJJJJIIIHIIJJJJIGIJJJJIIIJGHIJIIIJIIIHDGGIJJIIJHDGEHGIIGCGGGGECEAEHHHFFFDDCECABB?@DCCC?A3
@HWI-1KL150:70:C74KBACXX:1:1101:6748:1978 2:N:0:ATGCCT
GTGTGCAGCGGAGCCCTGCACGGGAGACAGGTCTGTCTTCTGCCAGATGGAAGTGCTCGATCGCTACTGCTCCATTCCCGGCTACCACCGGCTCTGCTGTN
+
CCCFFFFFHHHHHJJJJJJJJJJJJJJJJJJDHIIFHJJIIJIJJJJIIJEHHAEEHFFFECCBDDDCCDDDDDDFEEDDDDDDDDDDDDDDDDDCDDDA9
@HWI-1KL150:70:C74KBACXX:1:1101:6964:1994 2:N:0:ATGCCT
GGTGGATCTTATATGGGAGGATGCACTGTTCATGTTTGAGTATTTTAAGCCCAAGACCCTTCCTGAATTTGACAGCTATAAGACCAGCACCGTGTCTGCTN
+
BC@FFFFFHHHGHJJJJIIJHIJJJJJJIJJJJJJJJJJJDHIJJJJJJJGHJJJJJIJIJIJJJJJJJJJIJGHHGHFFFFFFEEEDEDDDDDBBDDDD:
@HWI-1KL150:70:C74KBACXX:1:1101:8404:1977 2:N:0:ATGCCT
GAAAATAATTCACAAATAGTGTTACAGCTCCATCCACTGAAAATTGTCATAAAAGACATTTTTTCAATGAGTTCATTTTTAGAGAAACCATTCCAGATATC
+
@CCFFFFFHGHHHJJGHIJCJJIJJJJJJIHGIJJIJJIGIIIIIJIIHHDGGGJJIGIIJJJJGIEHIGJIHIJIJCHEECB@;?BCACECDCDCCCCD-
@HWI-1KL150:70:C74KBACXX:1:1101:8836:1977 2:N:0:ATGCCT
CACTTTGAAAACTAGAAATCATTACACAAAGTTAAGAACTCACTATTTTAACATGGAGGATGGAGCCTGACATCAGGCCTTTAGGGCACGTTCACCATGCC
+
CCCFFFFFHHHHHJJIJJJJJJJJJIJJJJJJJIIJJIJJIJIIIJJIJJHGCHIHGIIIJJGHIJJJJJJIJJGGHGHFFFFFDEEDDDDDDDDDDDDDA
@HWI-1KL150:70:C74KBACXX:1:1101:9989:1970 2:N:0:ATGCCT
AAGAACAAGTTTCTGTACATCTCATTATCATTCTGCCTGTTCACTTGCCTCATGTTTTTGCTGTTCTTCTTTTTGACAGGGATTCGATAAACAAGTGGATN
+
@@@DDEEDHDHHHEFCEH?FHIIIIHIIIIIGGIIIIIIFFIGGHIECEHBDHGBGCHIIIIIIIIIIIIHGGGE;CCHGHHCFFF@DCA6>CBC@CCCC>
@HWI-1KL150:70:C74KBACXX:1:1101:10460:1982 2:N:0:ATGCCT
TACATAGGAAGAAAATGCCAATCAAAAATGAAAGTCAGTTAAAACCACTTGAAAGCAATGTCTGTTCCTTTTTAGAATGGAAAGTTGGAGGAAACTTCAGC

As you can see, the files seem to be well formatted.

I'm applying 2 different sets of filters:

fastp -A -L -w 4 -j ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.json -h ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.html -W 20 -M 30 -i ffcf607a-7b70-4e16-a60b-c09197fa1601_1.fastq.gz -I ffcf607a-7b70-4e16-a60b-c09197fa1601_2.fastq.gz -o ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_1.fastq.gz -O ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_2.fastq.gz

fastp -A -L -w 4 -j ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.json -h ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.html -q 30 -u 50 -n 5 -i ffcf607a-7b70-4e16-a60b-c09197fa1601_1.fastq.gz -I ffcf607a-7b70-4e16-a60b-c09197fa1601_2.fastq.gz -o ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_1.fastq.gz -O ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_2.fastq.gz

here is the log file:
Read1 before filtering:
total reads: 69722014
total bases: 7041923414
Q20 bases: 6419265369(91.1578%)
Q30 bases: 5704731454(81.011%)

Read1 after filtering:
total reads: 67489520
total bases: 6816441520
Q20 bases: 6312681343(92.6096%)
Q30 bases: 5624025162(82.5068%)

Read2 before filtering:
total reads: 69722014
total bases: 4505725297
Q20 bases: 4505725297(100%)
Q30 bases: 4505725297(100%)

Read2 after filtering:
total reads: 67489520
total bases: 4360073359
Q20 bases: 4360073359(100%)
Q30 bases: 4360073359(100%)

Filtering result:
reads passed filter: 134979040
reads failed due to low quality: 4110650
reads failed due to too many N: 354338
reads failed due to too short: 0
reads with adapter trimmed: 0
bases trimmed due to adapters: 0

JSON report: ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.json
HTML report: ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.html

fastp -A -L -w 4 -j ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.json -h ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_report.html -W 20 -M 30 -i ffcf607a-7b70-4e16-a60b-c09197fa1601_1.fastq.gz -I ffcf607a-7b70-4e16-a60b-c09197fa1601_2.fastq.gz -o ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_1.fastq.gz -O ffcf607a-7b70-4e16-a60b-c09197fa1601_windows_W20M30_2.fastq.gz
fastp v0.6.0, time used: 910 seconds
Read1 before filtering:
total reads: 69722014
total bases: 7041923414
Q20 bases: 6419265369(91.1578%)
Q30 bases: 5704731454(81.011%)

Read1 after filtering:
total reads: 63210865
total bases: 6384297365
Q20 bases: 6031554512(94.4748%)
Q30 bases: 5446376816(85.3089%)

Read2 before filtering:
total reads: 69722014
total bases: 4505725297
Q20 bases: 4505725297(100%)
Q30 bases: 4505725297(100%)

Read2 after filtering:
total reads: 63210865
total bases: 4083554985
Q20 bases: 4083554985(100%)
Q30 bases: 4083554985(100%)

Filtering result:
reads passed filter: 126421730
reads failed due to low quality: 12684916
reads failed due to too many N: 337382
reads failed due to too short: 0
reads with adapter trimmed: 0
bases trimmed due to adapters: 0

JSON report: ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.json
HTML report: ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.html

fastp -A -L -w 4 -j ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.json -h ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_report.html -q 30 -u 50 -n 5 -i ffcf607a-7b70-4e16-a60b-c09197fa1601_1.fastq.gz -I ffcf607a-7b70-4e16-a60b-c09197fa1601_2.fastq.gz -o ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_1.fastq.gz -O ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_2.fastq.gz
fastp v0.6.0, time used: 918 seconds

the result files look like this:
ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_1.fastq.gz
@HWI-1KL150:70:C74KBACXX:1:1101:1931:1994 1:N:0:ATGCCT
NTCTTTCTGACCCTCACTGAGAGCGACCTGAAGGAAATTGGCATCACGTGCGTCCAGAAGGGCCGCTCTGGCCCTCAGCCCGGGGTTGGGGCAAACTCCCA
+
#1=DFFFFGHFHHIJEIJJIEIJGIIIGIIJFHJICEICGGICIII@GFHIGCHGAHECHFBF=A@?BDDDDDDD8<CCCDD<@BB<>BBBB9?<<ACCD(
@HWI-1KL150:70:C74KBACXX:1:1101:4185:1976 1:N:0:ATGCCT
NACTCCGGGGCTGCTCTGGACCAGTTTCCATTCCCGTCTCCCCACCCTCACCATCCCTCAGGACATCACGAGTGGTTGCTTGGACCTGAGGTGGACATTCT
+
#1=DFFFFHHHHHJJJJJJGIJIJFHIGIIIJIJJJGIIIJJJJIJJIJJJHHHHHFFFEECEDEDDDC@?@<BBCDDDCDDBCDDCCCDD>BB<A(4:>(
@HWI-1KL150:70:C74KBACXX:1:1101:4438:1971 1:N:0:ATGCCT
NCCCAACCAATCAGCCCCAATTTACGATCTATGTAACTCACCAGTTCGATATGCCAATAACCTGGCCTGAACCATGCAGTGCCTTGCAATTTCCTGTGGCA
+
#1=DDFFFHHHHHJJJJJJJJJJJIJJJJJJJJIJDGIJJJIJJIJIJJIJIIIIIIIGGICHIGHHHGHCFFFE66;@ACCCC@CCCAAC@CDCCCC?B?
@HWI-1KL150:70:C74KBACXX:1:1101:4702:1995 1:N:0:ATGCCT
NGGGGTCCTCTGCGGCCAGGGCAGCGCTGCTCAGCATGATGAAGACAAGGATGAGGTTGGTGAAGATGTGGTGGTTGATGAGCTTGTGGGAGCCTACGCGN
+
#4=DDDFFHHHHHJJJJIHHIJIIJJJJJJJJIIIIJJJJJJJJJJJJHICHHEHFCDFD;?AAC@CCACD(8?',5(4:4>ACC+++(&2?&8((+&&)5
@HWI-1KL150:70:C74KBACXX:1:1101:6121:1971 1:N:0:ATGCCT
NGTCCTCCCACCAGCCGGGCACTACTTACATGACGATGAGAGCAGCGTCTCGGGAGTAATCCAGCACAATCTCCTTCAGCCTCACCTGCCGAAGGGCCTGN
+
#1:BDFFFHHHHGIIIIGFBGIBEH@GECHCHGGGDFHIIIHICEGHGEHIHEH6?5;@3(>(.-(;(55>AC((,,55>?<<C??:9<A9&5)5<(2+(2
@HWI-1KL150:70:C74KBACXX:1:1101:6748:1978 1:N:0:ATGCCT
NATCCCGTTGGCTTTCCAGGAGGCTCTGCAGCATCTGCAGGGTCCTGGGGTCCTGGTAAGGGGCTTCCAGGAGTGGAGAAGGGGGGCAGTGAGGTTGGGCC
+
#1=DFFFDHHHHHJJJJJJJJJJJJIJIJJIFIIIJJJJIJJGIHIIIIJGGIJJJAHIIJHHHFFFEDCE;@?B;5<ABDBBDDB@BB3@ACD?BD>B?(
@HWI-1KL150:70:C74KBACXX:1:1101:6964:1994 1:N:0:ATGCCT
NGTGGCAATTCTCTTCAGTAGGTTGGCCAAGTCAGCAGACACGGTGCTGGTCTTATAGCTGTCAAATTCAGGAAGGGTCTTGGGCTTAAAATACTCAAACA
+
#1=DDFFDFHHHHJJJJJHIJJHJJJJJJJJIJJJJJJJJJJJJGHIJJJFGIIIJJJIJIHIJICGGCGGEFHH;B;CC@CDDDDDDCCCEDCDCD:CC5
@HWI-1KL150:70:C74KBACXX:1:1101:8404:1977 1:N:0:ATGCCT
NTTTTTCACTCCATTGTTGTTGTTTACCCAGTTTATGGGGGTTGTAATGTTTATCACACTCCTTGGATGATTTCCGAAGGTAAGATATCTGGAATGGTTTT
+
#4=DDFFFHHHHHGHHFHIFHIGIIIIJJJJBHGIIIIGGI?FGFEHJGGAHIJJJGIGGHHHHHFFFFDCC@CE;3;>@:@>C>ACA@CDCD<AC>ACA<
@HWI-1KL150:70:C74KBACXX:1:1101:8836:1977 1:N:0:ATGCCT
NGCTGTTTTACAAGTTGGTAGTTTTCTCTTCTTGGCATGGTGAACGTGCCCTAAAGGCCTGATGTCAGGCTCCATCCTCCATGTTAAAATAGTGAGTTCTT
+
#1=DFDFFHHHHHJJIJJCFHCFIJEHGIJIJJGHEGIIJFEGHDGEIIJHCGIJEGIJGEGHIGIIHIG<AECA7?DDFFFECC(>CC@CC>CCC>C@D:
@HWI-1KL150:70:C74KBACXX:1:1101:9989:1970 1:N:0:ATGCCT
NTTGTCAACTTTGCTTTTGCTCATGTTGTAATGTTTGGCAATATATGACACATCCACTTGTTTATCGAATCCCTGTCAAAAAGAAGAACAGCAAAAACATN
+
#1:B=DDDFFFHDGIIIIIGGIEGHBH@9<9FFFHGIIG>FGEGIGDDHG<:9?D@D8?>?FHGGGGGBAG)@;77;4?A;(9?@DFEEA>55=>=;'((,
@HWI-1KL150:70:C74KBACXX:1:1101:10460:1982 1:N:0:ATGCCT
NCCTATGCAACCTCAGTGTCCACTGAGAAGGGAATCTTGTGGTATGGAACAATGTGGCAAAAAGGTACAAAGTATTCTTACACCTGGAATTCTTAACCTGN
+
#4BDFFFFHHHHHJJJIJJJJJJJJGJIJJJIFHGIJJJHIIFHIGIGJJGIIIIGGJJJJJGGG)=;CDHE=AC7ADEFFFFDDEE<CCA@DED;C@@BD
@HWI-1KL150:70:C74KBACXX:1:1101:11860:1969 1:N:0:ATGCCT
NACCTTGTCCTTGGCACTGCGGCAGCCTTGCAGGCTGGCAAGGATCTGGGCCTGCACACTCTGAACCCACAGCTCCCGCTCCTCCGCCGTTGAAGCCTCNN
+
#1=DDFFFHHHHHJJIJJJJJJJIJIJIJJIJJJIJJJEFHGI=CFGEGF2CCACEHHGBFDECAABB?@?ABC>58?BDDBACABBD>99?2@A:<A<0)
@HWI-1KL150:70:C74KBACXX:1:1101:12222:1966 1:N:0:ATGCCT
NAGCTTAAACAGTGGGTTTTTCAATGTCTCTCTTTAGGATTTTTGCTGGGTAAAAGCCTGTTTTACGCGTGGAATGCACACCTCCGGCCAACGGAGACTCC

ffcf607a-7b70-4e16-a60b-c09197fa1601_bases_q30u50n5_2.fastq.gz
@HWI-1KL150:70:C74KBACXX:1:1101:1931:1994 2:N:0:ATGCCT
ACAGCCTGCGGGGGGAATGTGACCAGGATATGCCTCAGCGTCCCAAGAGCGCTTACATGAGTGGGAGTTTGCCCCAACCCCGGGCTGAGGGCCAGAGCGGC
+
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
+
CCCFFFFFHHHHHJJJJJJJJIJIJJJJJJJJJJJJGIJJJJJJJGHGFHGFIIGJJJJFEHHHGFFFFDF@CCEEEEDDBDBDDCCCDDDDDBBB??CD+
@HWI-1KL150:70:C74KBACXX:1:1101:4185:1976 2:N:0:ATGCCT
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
ACACTAGCCACTCACGTTCCATCTCTTCCTCGGAGAAATCCTCAGGCCCAGCCAAGGGCAGGAGCAAAAAGGGGAGAATGTCCACCTCAGGTCCAAGCAAC
+
CCCFFFFFHHHHHJJJHIIJJJIJJJJJJIJJIJJJJJJIJJJJJJIIJJFHIJIJJJIIHHGFFFFDDDDDDD@BBDDDDDDDDDDDDDD@CDD>CBBDA
K
CCCFFFFFHHHHHJJJHIJJJJJJJIJJJIJJIJIJJJJJIIIIJJIJIIDDEIGGHJHGGGIIIGGDGHIJJJHHHHFFFCDECCCEECDCCDCCCD??3
@HWI-1KL150:70:C74KBACXX:1:1101:4539:1970 2:N:0:ATGCCT
GGCCTCGTGCGCTCGGGCCCGCACGCCGTTGTTCGCGTCACCCCCACCCAGCTCCCTTCCGCGTGTGCTCGGAGGGCGCGGCGCACCGCCTACGCAGGCCN
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
+
CCCFFFFFHHHHHJJJJIJJJJJJJJJJIJHEHHFFDDBDDDDDDDDDDDDDDDDDDDDDDDDBDBDDDCDD;BBDDDDDDBD@BDDDD<<>CDDBBDDD>
@HWI-1KL150:70:C74KBACXX:1:1101:4702:1995 2:N:0:ATGCCT
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
CAAGCAGCGGCTTTTCCCTGCAGGATCCGCGTAGGCTGCCACAAGCTCATCAACCACCACATCTTCACCAACCTCATCCTTGTCTTCATCATGCTGAGCAN
+
CCCFFFFFHHHHHJJJIIIIJIJJJIJJIJJEHEIIIIDGHIGIAEHHHH?@DEFFDDDDDCDCDEDDD>B@BCDCCDDACDCDDDDEEE@CDDCCCBDD3
K
@HWI-1KL150:70:C74KBACXX:1:1101:6121:1971 2:N:0:ATGCCT
AGACAGGAGACTCTATAAGAATTTATGAGGCAGCAGAGTCTACAAGTAAATCATGAATCCAGTTGAAAATGTTAATGAGGCCATAGACGTGGTGAAGGATT
+
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
@C@DFFFFHHHGHJJJJIIIHIIJJJJIGIJJJJIIIJGHIJIIIJIIIHDGGIJJIIJHDGEHGIIGCGGGGECEAEHHHFFFDDCECABB?@DCCC?A3
@HWI-1KL150:70:C74KBACXX:1:1101:6748:1978 2:N:0:ATGCCT
GTGTGCAGCGGAGCCCTGCACGGGAGACAGGTCTGTCTTCTGCCAGATGGAAGTGCTCGATCGCTACTGCTCCATTCCCGGCTACCACCGGCTCTGCTGTN
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
+
CCCFFFFFHHHHHJJJJJJJJJJJJJJJJJJDHIIFHJJIIJIJJJJIIJEHHAEEHFFFECCBDDDCCDDDDDDFEEDDDDDDDDDDDDDDDDDCDDDA9
@HWI-1KL150:70:C74KBACXX:1:1101:6964:1994 2:N:0:ATGCCT
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
GGTGGATCTTATATGGGAGGATGCACTGTTCATGTTTGAGTATTTTAAGCCCAAGACCCTTCCTGAATTTGACAGCTATAAGACCAGCACCGTGTCTGCTN
+
BC@FFFFFHHHGHJJJJIIJHIJJJJJJIJJJJJJJJJJJDHIJJJJJJJGHJJJJJIJIJIJJJJJJJJJIJGHHGHFFFFFFEEEDEDDDDDBBDDDD:
K
@HWI-1KL150:70:C74KBACXX:1:1101:8404:1977 2:N:0:ATGCCT
GAAAATAATTCACAAATAGTGTTACAGCTCCATCCACTGAAAATTGTCATAAAAGACATTTTTTCAATGAGTTCATTTTTAGAGAAACCATTCCAGATATC
+
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
@CCFFFFFHGHHHJJGHIJCJJIJJJJJJIHGIJJIJJIGIIIIIJIIHHDGGGJJIGIIJJJJGIEHIGJIHIJIJCHEECB@;?BCACECDCDCCCCD-
@HWI-1KL150:70:C74KBACXX:1:1101:8836:1977 2:N:0:ATGCCT
CACTTTGAAAACTAGAAATCATTACACAAAGTTAAGAACTCACTATTTTAACATGGAGGATGGAGCCTGACATCAGGCCTTTAGGGCACGTTCACCATGCC
KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
+
CCCFFFFFHHHHHJJIJJJJJJJJJIJJJJJJJIIJJIJJIJIIIJJIJJHGCHIHGIIIJJGHIJJJJJJIJJGGHGHFFFFFDEEDDDDDDDDDDDDDA

The other files look similar (the _1 files are normal, the _2 files have some extra "KKKKKK" lines...)

When trying to align the data to the genome with BWA-MEM, only 3 sequences in my file seem to be well formatted.
This is probably due to the non-canonical format of my FASTQ reads with the extra lines.

Do you have any idea why this doesn't work?

Read names do not match with dual UMIs

Thank you for making your wonderful tool!

For dual-UMI experiments, there may/should be different UMI tags on the forward and reverse read of a pair. Is there an option (now or in development) to remove the UMI tags from each read and place them on both of the resultant reads? Downstream tools require that the read names be the same, so if there are different UMI tags on the forward and reverse of a pair, it will fail. Instead, it should have the read name followed by the forward and reverse UMI tags, with a delimiter between them.

For instance, in fastq_1.fq.gz
read_1_name:etc:etc:etc:etc:etc:etc:read_1_tagread_2_tag

And in the pair, fastq_2.fq.gz
read_2_name:etc:etc:etc:etc:etc:etc:read_1_tagread_2_tag
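
A hedged pointer, not from the original report: fastp documents a per_read UMI mode which, if it behaves as described, extracts a UMI of --umi_len bases from each read and writes the concatenated tag into the names of both reads of the pair. A minimal sketch, assuming that documented behavior (file names here are hypothetical):

# --umi_loc=per_read takes UMIs from both reads and tags both read names
fastp -i fastq_1.fq.gz -I fastq_2.fq.gz -o out_1.fq.gz -O out_2.fq.gz -U --umi_loc=per_read --umi_len=8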

Content of rRNA

Hi, if I want to know the content of rRNA, can fastp calculate it?

Average read length pre- and post-trimming

This is a great tool. Adding the pre- and post-trimming average read length would be super helpful. Better to get all the information in one pass through the read file(s) than two.

Thanks!

PolyG trimming?

Hello,

Thanks for the program. I'm working with NovaSeq data currently and would like to try out the polyG trimming. After trimming, it looks like fastp still retains reads with 8 or fewer Gs at the ends. Is that a default set by fastp, and what is the reason for it? Is there any way I can change the number of Gs fastp lets through its filter?
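
(A hedged note: the polyG trimmer documents a minimum run length, --poly_g_min_len, defaulting to 10, which would explain shorter G runs surviving. Assuming the option behaves as documented, lowering it should trim shorter tails:)

# --poly_g_min_len sets the shortest polyG run that triggers trimming (default 10)
fastp -i in.fq.gz -o out.fq.gz --trim_poly_g --poly_g_min_len 5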

Cheers,
Mun

--umi_prefix requires uppercase

This parameter seems to require uppercase letters. For instance:

$ fastp -i test-umi_1.fastq.gz -I test-umi_2.fastq.gz -o test-r1-out.fastq -O test-r2-out.fastq -U --umi_loc=read1 --umi_len=8 --umi_prefix=mbc
ERROR: UMI prefix can only have characters and numbers, but the given is: mbc

But uppercase MBC works fine.

About adapter trimmer

Hi, I have looked at the source of AdapterTrimmer. The principle is to allow a maximum number of differences while searching for the largest overlap between the PE reads, and to remove the remainder as adapter. But there is a problem when comparing PE read bases: if R1 contains a spurious indel, all of the following bases differ from R2, so the difference count increases, the detected overlap shrinks, and the adapter cannot be removed.
Could you add an argument to account for indels in the next version?

FastUniq-like function

Could you add a function that deduplicates reads for de novo analysis? It seems that only FastUniq can do such work.
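
For what it's worth, the fastp feature list now advertises ultra-fast FASTQ-level deduplication. A minimal sketch, assuming a recent fastp build where the --dedup flag works as documented:

# drop duplicated read pairs during preprocessing
fastp -i in.R1.fq.gz -I in.R2.fq.gz --dedup -o dedup.R1.fq.gz -O dedup.R2.fq.gz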

%GC content

Hi,

would it be possible to add the calculated %GC content on the base contents graphs, near the base index (top right)?

Thanks

ld returned 1 exit status while make

fastqreader.cpp:32: undefined reference to `gzoffset'
./obj/peprocessor.o: In function `PairEndProcessor::initOutput()':
peprocessor.cpp:35: undefined reference to `gzbuffer'
peprocessor.cpp:38: undefined reference to `gzbuffer'
collect2: error: ld returned 1 exit status
make: *** [fastp] Error 1
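
The missing symbols (gzbuffer, gzoffset) were added in zlib 1.2.4, so this linker error usually means the system zlib is older than that. A possible fix, sketched under the assumption that the Makefile honors standard include/library flags (the zlib version, URL, and paths below may need adjusting):

# build a newer private zlib
wget https://zlib.net/zlib-1.3.1.tar.gz
tar xzf zlib-1.3.1.tar.gz && cd zlib-1.3.1
./configure --prefix=$HOME/opt/zlib
make && make install
# then point the fastp build at it (edit the Makefile if these are ignored)
cd ../fastp
make CXXFLAGS="-g -O2 -I$HOME/opt/zlib/include" LDFLAGS="-L$HOME/opt/zlib/lib"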

poly-A before poly-G in NextSeq reads

Hi,
I am seeing a large portion of NextSeq reads that have the poly-G tail, and have successfully trimmed that off with fastp. Most sequences also have a poly-A run just before the poly-G tail, which is apparently due to a reduction in signal strength (lower quality) from clusters before they fail altogether (see https://sequencing.qcfail.com/articles/illumina-2-colour-chemistry-can-overcall-high-confidence-g-bases/). I tried to use the -x option in the same trimming command, but it doesn't work (presumably because the poly-A isn't at the end of the original read). I suppose I could try it as a second trimming pass, but I wanted to know whether this is a common issue for people with poly-G reads from NextSeq data, and if so, whether it could be incorporated into the options.
The data look like this after trimming:
@NS500704:337:HGG2HBGX3:1:11101:10170:2997 1:N:0:GACGAGG+CGGAAT
GCAAGGTCTTAATCAAATTTTGTCAGCTGCAAGATCGAAGAGCACACGTCTGAACTCCAGTCACGACGAGGATCTCGTATGCCGTCTTCTGCGTGAAAAAAAAAA
+
AAAAAEEEEEAEEEE6EEAEEEEEEEEEEEEEEEEEEE/EEEEEEEEAEEEEE/EEEEEEEEEEEAEE/EEEA</AE/AEE<E</6<E//EE/EE/<AAEEEEA/
@NS500704:337:HGG2HBGX3:1:11101:8528:3344 1:N:0:GACGAGG+CGGAAT
ACAGAAACAGGTGCACAGTTCCCCATCAAGATCGGAAGACACACGTCTGAACTCCAGTCACGACGAGGCTCTCGTATGCCGTCTTCTGCATGAAAAAAAAAA
+
AAAAAAEEEEEE6E/AEEE/EEEEEEEEEE/EEEEEEEEE/EAEEEEEE/AE/EAE/AEEEEEEA/EE/AE<///A<E<E//<EE//////A/EEEEEE///

Thanks,
Phil Morin ([email protected])
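
A hedged aside on this case: in the example reads, the poly-A run sits behind a read-through TruSeq adapter, so explicitly supplying the adapter sequence (rather than relying on overlap detection alone) together with polyG and polyX trimming may remove the whole tail; as I read the docs, polyG trimming runs before polyX when both are enabled. A sketch with hypothetical file names:

# the adapter sequence shown is the standard TruSeq read-through adapter
fastp -i in_R1.fq.gz -I in_R2.fq.gz -o out_R1.fq.gz -O out_R2.fq.gz --adapter_sequence AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC -g -x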

JSON and HTML reports are not generated if a directory is given to --json and --html options

It's not really a bug but JSON and HTML reports are not generated if a directory is given to --json and --html options.

I wanted to generate JSON and HTML reports into a specific directory, keeping the default names.
I just gave a directory to the --json and --html options, but it did not generate the reports.

Maybe it's possible to create --output_directory option (current directory by default) so that we just have to change names with --out1, --out2, --json and --html options. Just an idea 🙂 !

why "--cut_by_quality3" reduce reads counts

Hi sfchen,

I want to know why the "-3" option reduces the number of reads passing filters.

fastp -i SRR1770413_1.fastq -I SRR1770413_2.fastq -q 20 -u 20 -o out.SRR1770413_1.fastq -O out.SRR1770413_2.fastq
[screenshot: filtering result without -3]

fastp -i SRR1770413_1.fastq -I SRR1770413_2.fastq -q 20 -u 20 -3 -o out.SRR1770413_1.fastq -O out.SRR1770413_2.fastq
[screenshot: filtering result with -3]

Add -v/--version option

Could you please add a -v or --version argument to fastp that outputs the full version of the program? Currently, there is no way to know which version of the program is being run. Thanks!

Sort overrepresented sequences by count

Hi,

In the HTML report would it be possible to output the overrepresented sequences sorted by the count?

Currently, high counts can be scattered through the table, making them harder to see; see the HTML example:

[screenshot: overrepresented sequences table in the HTML report]

Stitch together overlapping reads?

When the DNA library is overly short, most reads may overlap.

Can fastp stitch these reads together (instead of just correcting errors)?

So input R1, R2 would produce output R1, R2 and SR (stitched, longer single end reads)
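
A hedged follow-up: later fastp releases document exactly this merging mode. Assuming -m/--merge and --merged_out behave as described, the requested R1/R2/SR layout would look like:

# merged (stitched) pairs go to SR.fq.gz; unmerged reads stay in -o/-O
fastp -m -i R1.fq.gz -I R2.fq.gz --merged_out SR.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz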

question: activate/deactivate filter types

Hi,

I love your tool and I have a question:

  • when using the -W and -M options, is it possible to disable the -q, -u and -n options? I tried a -W -M -Q line, but all the options were disabled, including -W and -M... (I want to use the "read cutting by quality" options and disable the "quality filtering" options at the same time; see the sketch below)
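
A hedged reading of the options: -W and -M are only parameters; the cutting itself has to be switched on with --cut_by_quality5/--cut_by_quality3, while -Q disables just the -q/-u/-n quality filter. If that reading is correct, this combination cuts by sliding window without any quality filtering:

# -Q turns off quality filtering; the -5/-3 style cutting stays active
fastp -i in.fq.gz -o out.fq.gz -Q --cut_by_quality5 --cut_by_quality3 -W 20 -M 30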

read2 -O "is a folder, not a file, quit now"

Hi all,

Trying to run fastp on a PE150 sample.

Here is the exact line I'm running:
fastp -i Emx1_1_11_CTRL_USPD16084012-4_HHG33BBXX_L6_1.fq.gz -I Emx1_1_11_CTRL_USPD16084012-4_HHG33BBXX_L6_2.fq.gz -o r1.fq.gz -O r2.fq.gz

Here is the error I get:
ERROR: 'r2.fq.gz' is a folder, not a file, quit now

I have no problems running fastp on either of these fastq files in single-end mode. I also tried using --out2 instead of -O, and I get the same result.

Any idea how I can get this to run?

Best,
David

overtrimming of reads

I am benchmarking fastp against other read trimmers using the workflow I developed for the Atropos paper (https://github.com/jdidion/atropos/tree/master/paper/workflow). I find that fastp has a high rate of read overtrimming. Example fastq input and output are attached. The command I used is:

fastp \
  -i {fastq1} -I {fastq2} -o {prefix}.1.fq.gz -O {prefix}.2.fq.gz \
  --adapter_sequence {adapter1} --adapter_sequence_r2 {adapter2} \
  --thread {threads} --length_required 25 --disable_quality_filtering

Nearly all of these overtrimming events involve the spurious removal of up to 10 bases from one or both reads:

[screenshot: examples of overtrimming events]

I suspect this might be due to overzealous alignment of the reads to each other, and could probably be fixed with an option to require a minimum insert overlap before trimming. Another approach (which is offered as an option in Atropos) is to compute the random match probability of each alignment and compare against a user-specified threshold value.

example.zip
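
A hedged note: newer fastp versions expose overlap-analysis thresholds. If --overlap_len_require (and --overlap_diff_limit) behave as documented, requiring a longer overlap should suppress trims based on short, spurious alignments:

# the {…} placeholders follow the command style used above
fastp -i {fastq1} -I {fastq2} -o {prefix}.1.fq.gz -O {prefix}.2.fq.gz --adapter_sequence {adapter1} --adapter_sequence_r2 {adapter2} --length_required 25 --disable_quality_filtering --overlap_len_require 30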

BAM support

Hello,

We are using unmapped bam files for storing and archiving our sequence data. It would be really nice if fastp can take bam files as input and output fastq/bam files.

Best,
Bekir

adapter trimming with respect to quality

Hello,

First, let me just say I have been working with fastp for over a month now and am very pleased with the performance and the direction the tool is going. It also appears to be quite accurate, and I look forward to the forthcoming publication.

However, I have noticed that the adapter trimming does not trim efficiently under certain conditions where quality dips but there is still an exact match to the adapters on both reads of a pair. In this case I have synthetic data centered at Q35 and Q20.

I was going through a tool evaluation comparison using this data and found that fastp excels in most cases, with >99% trimming sensitivity and near-perfect specificity. However, as the qualities drop near Q20 the sensitivity also drops dramatically. Some tools show no loss of adapter clipping as quality shifts. Note my usage below, and that I am not doing any quality trimming.

Any ideas on how we can better address the below scenario?

Here is one example where clipping fails; the adapter starts at position 114. You can see that fastp successfully trims in one case but not the other. PS: I have more examples if needed.

Adapters:
>TruSeq_Index_Adapter_5p
GATCGGAAGAGCACACGTCTGAACTCCAGTCAC
>TruSeq_Index_Adapter_3p
ATCTCGTATGCCGTCTTCTGCTTG

fastp -Q -i 1m_150bp_35q_R1.fq.gz -I 1m_150bp_35q_R2.fq.gz -o fastp.1m_150bp_35q_R1.fastq.gz -O fastp.1m_150bp_35q_R2.fastq.gz

fastp -Q -i 1m_150bp_20q_R1.fq.gz -I 1m_150bp_20q_R2.fq.gz -o fastp.1m_150bp_20q_R1.fastq.gz -O fastp.1m_150bp_20q_R2.fastq.gz

Post-trimming results

Read1_35q:
@999465_150_114 1:
AGTCTCAGGATACAAAATCAATGTACAAAAATCACAAGCATTCTTATACACCAATAACAGACAAACAGAGAGCCAAACCATGAATGAACTCCCATTCACAATTGCTTCAAAGAG
+
GJJJ?JAJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJIJJJ>JJJJJJJJJJJJJGJJJJJJJJJJJJJJEJHJJJJHJJJJJJJJJHJJJJJCJCJJEJJJJJJJJJF

Read2_35q:
@999465_150_114 2:
CTCTTTGAAGCAATTGTGAATGGGAGTTCATTCATGGTTTGGCTCTCTGTTTGTCTGTTATTGGTGTATAAGAATGCTTGTGATTTTTGTACATTGATTTTGTATCCTGAGACT
+
GGJJJJJJJJJJJFJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJHJJJJFJJJGJJJGJJJJJ?JJJJJJJJJGJCJJJJJJJIJJJ?J?JJJGJJBJJJGJJJJJJJ

Read1_20q:
@999465_150_114 1:
AGTCTCAGGACACAAAATCAATGTACAAAAATCACAAGCATTCTTTTACACCAATAACAGACAAACAGAGAGCCAAACCATGAATGAACTCCCATTCACAATTGCTTCAAAGAGGATCGGAAGAGCACACGTCTGAACTCCAGTCACGGC
+
0...4689..17428;:=5:383<=?<755199;6;6367>509329;1<4.<553816619:2<0.36:.6663.;.75:6.5:7.:9..0665/25..:22.0:1.18=22.12.8799844.2.549/5.7..94/3...0../...

Read2_20q:
@999465_150_114 2:
CTCTTTGAAACAATTGTGAATGGGAGTTCATTCATGGTTTGGCTCTCTGTTTGTCTGTTATTGGTGTATAAGAATGCTTGTGATTTTTGTACATTGATTTTGTATCCTCAGACTATCTCGTATGCCGTCTTCTGCTTGCAACATTCACCA
+
-6--1--/;38<576-62<87<:6/6:3<--4831>3-;--32<;2-<57999.</6864<23.-67-007-84<858485.0..534/24.4/-16?:9-7:6;0;3/2-46/21732;0--2463-5/8:57403---.3-6------

Trimming for polyA/T/C; improved runtimes for downstream alignment and variant calling

Shifu;
Congrats on the paper in bioRxiv and thanks for all the great work on fastp. We've been working on improving the runtimes for somatic variant calling workflows and exploring quality and polyX trimming. We did a test run with fastp and atropos and found that the major improvements in runtime were due to removal of polyX sequences at the 3' ends of reads:

https://github.com/bcbio/bcbio_validations/tree/master/somatic_trim

We'd used the new polyG trimming functionality (thank you), but only a crude method of 3' polyA/T/C removal, which appears to be less effective with fastp than with atropos trimming. When additional polyX stretches get removed, we get much better runtimes for alignment and variant calling.

I saw general polyX and low complexity trimming are on the roadmap for fastp and would like to express my support for this. We've been making great use of fastp for adapter conversion and would like to offer trimming as part of an effort to speed up alignment and variant calling both on NovaSeqs and more generally.

As a secondary help for integration, is streaming trimming a possibility for paired ends? To help improve preparation runtimes I'd been thinking of including trimming and streaming directly into bwa/minimap2 alignment, or being able to stream outputs into bgzip so we can index and parallelize variant calling.
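
For the streaming part, a minimal sketch assuming fastp's documented STDOUT support (paired output on STDOUT is interleaved, which bwa mem consumes with -p):

fastp -i R1.fq.gz -I R2.fq.gz --stdout -j fastp.json -h fastp.html \
    | bwa mem -p -t 8 ref.fa - \
    | samtools sort -o sample.bam -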

Thanks again for all the work on fastp.

Low-quality base trimming from leading and trailing side

Hello,
I have a question about the base trimming in fastp. Does it have an option for low-quality base trimming from both sides, like Trimmomatic's LEADING and TRAILING options? I would like to trim bad bases (Q<20) from both sides of the reads.

The "--cut_by_quality5/3" in fastp and "Slidingwindow" option in trimmomatic seem quite different. The former would trim the reads in the window if with low-quality, while for the latter one if the low-quality bases are in the begining of the read, the whole read will the removed.

For example:
Input fq file: Test.fq (the last 4 bases of read 1 and the first 4 bases of read 2 are bad ones)
@1\1
AATGATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGCATCG
+
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeEFCB
@2\1
AATGATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGCATCG
+
EFCBeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee

##If run with LEADING and TRAILING options in Trimmomatic (Exactly what I want, the bad bases are removed)

java -jar $Trimmomatic SE -phred64 Test.fq tt.fq LEADING:20 TRAILING:20
@1\1
AATGATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGC
+
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
@2\1
ATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGCATCG
+
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee

##If run with the SLIDINGWINDOW option in Trimmomatic (the 2nd read is removed entirely, as the bad bases are at the beginning of the read)

java -jar $Trimmomatic SE -phred64 Test.fq tt.fq SLIDINGWINDOW:4:20
@1\1
AATGATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGC
+
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee

If run with "--cut_by_quality5/3" in fastp (it is weird that only the last 2 bases of read 1 and the first 2 bases of read 2 were clipped off)

fastp --phred64 --in1 Test.fq --out1 Test_Trimed.fq --cut_by_quality5 --cut_by_quality3 --cut_window_size 4 --cut_mean_quality 20
@1\1
AATGATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGCAT
+
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF&'
@2\1
GATCGTAGCGATGCAAGCTAGCCCGATGCCCGATCGCATCG
+
#FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
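
A hedged workaround: with a window size of 1, the sliding-window cut degenerates to per-base trimming from each end, which should match Trimmomatic's LEADING/TRAILING behavior:

fastp --phred64 --in1 Test.fq --out1 Test_Trimmed.fq --cut_by_quality5 --cut_by_quality3 --cut_window_size 1 --cut_mean_quality 20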

Thanks!

Best,
Wenyu
