
shogun's Introduction


Shallow shotgun sequencing

Shallow seq pipeline for optimal shotgun data usage


Schematic overview of the shallow-shotgun computational pipeline SHOGUN. For every step in the SHOGUN pipeline, the user must supply the pre-formatted SHOGUN database folder. To run every step shown here in a single command, the user can select the pipeline subcommand. Otherwise, the analysis modules can be run independently.

a. filter - The input quality-controlled reads are aligned against the contaminant database using BURST to filter out all reads that hit human-associated genome content.

b. align - The contaminant-filtered reads are aligned against the reference database. The user can select one or all of the three alignment tools: BURST, Bowtie2, or UTree.

c. assign_taxonomy - Given the data artifacts from a SHOGUN alignment tool, output a Biological Observation Matrix (BIOM) format taxatable in which rows are rank-flexible taxonomies, columns are samples, and entries are counts for each taxonomy per sample. The alignment tool BURST has two run modes, taxonomy and capitalist. If capitalist mode is enabled, a rank-specific BIOM file is output instead.

d. coverage - The output from BURST can be used to analyze the coverage of each taxonomy across all samples in your alignment file. This can be useful for reducing the number of false-positive taxonomies.

e. redistribute - The rank-flexible taxatable is summarized into a rank-specific taxatable. This summarizes both up and down the taxonomic tree.

f. normalize - Each sample in the taxatable is normalized to the median depth of all the samples.
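The data artifact these steps pass around is the taxatable: a plain tab-delimited table whose rows are taxonomy strings, columns are samples, and entries are read counts. A minimal sketch of parsing one into nested dicts (the table contents below are made up for illustration):

```python
# Parse a SHOGUN-style taxatable: rows are taxonomy strings,
# columns are samples, entries are read counts per sample.
def parse_taxatable(text):
    lines = text.strip().splitlines()
    samples = lines[0].split("\t")[1:]
    table = {}
    for line in lines[1:]:
        fields = line.split("\t")
        taxon, counts = fields[0], fields[1:]
        table[taxon] = dict(zip(samples, map(int, counts)))
    return table

# Hypothetical two-sample, two-taxon table.
example = (
    "#OTU ID\tS1\tS2\n"
    "k__Bacteria;p__Firmicutes\t120\t80\n"
    "k__Bacteria;p__Bacteroidetes\t30\t200\n"
)
table = parse_taxatable(example)
```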

Installation

These installation instructions are currently streamlined for Linux systems. SHOGUN can be installed manually on Windows and macOS via the development installation. This package requires conda, a system-agnostic package and virtual environment manager. Follow the installation instructions for your system at http://conda.pydata.org/miniconda.html.

The CONDA way (personal install)

  1. Follow steps 1 and 2 of https://bioconda.github.io/ (including installing Miniconda 3.6 if you don't have Miniconda)
  2. Do this in a terminal:
conda create -n shogun -c knights-lab shogun
source activate shogun

Development Installation

  1. Do this in a terminal:
conda create -n shogun -c knights-lab shogun
source activate shogun
  2. Remove the conda-packaged SHOGUN and install from the GitHub master branch. This keeps all the conda dependencies installed.
conda uninstall shogun
pip install git+https://github.com/knights-lab/SHOGUN.git --no-cache-dir --upgrade

Optional: You can reinstall the newest git version of SHOGUN at any time via the command:

pip install git+https://github.com/knights-lab/SHOGUN.git --no-cache-dir --upgrade

Testing your installation

For testing, we currently use the built-in Python unittest framework. To run the test suite, change directory to the root folder of the repository, then run:

python -m unittest discover shogun

Documentation

SHOGUN help for Command-Line

SHOGUN is a command-line application meant to be run with a single command. Its help text is shown below.

Usage: shogun [OPTIONS] COMMAND [ARGS]...

  SHOGUN command-line interface

  --------------------------------------

Options:
  --log [debug|info|warning|critical]
                                  The log level to record.
  --shell / --no-shell            Use the shell for Python subcommands (not
                                  recommended).
  --version                       Show the version and exit.
  -h, --help                      Show this message and exit.

Commands:
  align                 Run a SHOGUN alignment algorithm.
  assign_taxonomy       Run the SHOGUN taxonomic profile algorithm on an...
  convert               Normalize a taxonomic profile using relative...
  coverage              Show confidence of coverage of microbes, must a be...
  filter                Filter out contaminate reads.
  functional            Run the SHOGUN functional algorithm on a taxonomic...
  normalize             Normalize a taxonomic profile by median depth.
  pipeline              Run the SHOGUN pipeline, including taxonomic and...
  redistribute          Run the SHOGUN redistribution algorithm on a...
  summarize_functional  Run the SHOGUN functional algorithm on a taxonomic...

align

The align command runs the selected taxonomic aligner (burst, bowtie2, or utree) on a linearized, demultiplexed FASTA file.

Usage: shogun align [OPTIONS]

  Run a SHOGUN alignment algorithm.

Options:
  -a, --aligner [all|bowtie2|burst|utree]
                                  The aligner to use.  [default: burst]
  -i, --input PATH                The file containing the combined seqs.
                                  [required]
  -d, --database PATH             The path to the database folder.
  -o, --output PATH               The output folder directory  [default: /mnt/
                                  c/Users/bhill/code/SHOGUN/results-170828]
  -t, --threads INTEGER           Number of threads to use.
  -h, --help                      Show this message and exit.

assign_taxonomy

Usage: shogun assign_taxonomy [OPTIONS]

  Run the SHOGUN taxonomic profile algorithm on an alignment output.

Options:
  -a, --aligner [bowtie2|burst|burst-tax|utree]
                                  The aligner to use.  [default: burst]
  -i, --input PATH                The output alignment file.
                                  [required]
  -d, --database PATH             The path to the database folder.
  -o, --output PATH               The coverage table.  [default: /mnt/c/Users/
                                  bhill/code/SHOGUN/taxatable-170828.txt]
  -h, --help                      Show this message and exit.

coverage

Usage: shogun coverage [OPTIONS]

  Show confidence of coverage of microbes.

Options:
  -i, --input PATH                The output BURST alignment.
                                  [required]
  -d, --database PATH             The path to the folder containing the
                                   database.  [required]
  -o, --output PATH               The coverage table.  [default: /mnt/c/Users/
                                  bhill/code/SHOGUN/coverage-170828.txt]
  -l, --level [genus|species|strain]
                                  The level to collapse to.
  -h, --help                      Show this message and exit.
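The idea behind the coverage check can be sketched as follows: merge the aligned read intervals on a reference and report the fraction of its length they cover. A taxon hit by many reads that all pile onto one small region is a likely false positive. This is a minimal sketch, not SHOGUN's implementation; the intervals and genome length are made up:

```python
# Fraction of a reference genome covered by aligned read intervals.
# Merges overlapping [start, end) intervals, then sums their lengths.
def coverage_fraction(intervals, genome_length):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    covered = sum(end - start for start, end in merged)
    return covered / genome_length

# Hypothetical alignments against a 1000 bp reference:
# two overlapping reads near the start, one at the end.
frac = coverage_fraction([(0, 100), (50, 150), (900, 1000)], 1000)
```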

filter

This command filters contaminant reads out of the combined sequences FASTA. Typically, this is done to remove human reads from WGS data. The reads are aligned against a contaminant-only database, and the reads that align are split out.

Usage: shogun filter [OPTIONS]

 Filter out contaminate reads.

Options:
 -i, --input PATH         The file containing the combined seqs.  [required]
 -d, --database PATH      The path to the database folder.
 -o, --output PATH        The output folder directory  [default:
                          /home/bhillmann/results-200302]
 -t, --threads INTEGER    Number of threads to use.
 -p, --percent_id FLOAT   The percent id to align to.  [default: 0.98]
 -a, --alignment BOOLEAN  Run alignment. If FALSE then alignment files must
                          be named <output_folder>/alignment.filter.b6.
                          [default: True]
 -h, --help               Show this message and exit.
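The splitting step above can be sketched as a simple partition: any read whose ID appears in the contaminant alignment is dropped, the rest are kept. The read IDs and sequences below are hypothetical; in SHOGUN the alignment itself is done by BURST:

```python
# Partition reads: those whose IDs appear in the contaminant
# alignment are dropped, the rest are kept for downstream analysis.
def split_contaminants(reads, contaminant_hits):
    kept = {rid: seq for rid, seq in reads.items() if rid not in contaminant_hits}
    dropped = {rid: seq for rid, seq in reads.items() if rid in contaminant_hits}
    return kept, dropped

reads = {"r1": "ACGT", "r2": "TTGA", "r3": "GGCC"}
hits = {"r2"}  # read IDs found in the contaminant alignment
kept, dropped = split_contaminants(reads, hits)
```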

functional

This command assigns function at a given taxonomic level. Lower-level KEGG IDs are assigned to higher-level KEGG IDs through plurality voting. Note that plasmids are not included in the KEGG ID annotation.

Usage: shogun functional [OPTIONS]

  Run the SHOGUN functional algorithm on a taxonomic profile.

Options:
  -i, --input PATH                The taxatable.  [required]
  -d, --database PATH             The path to the folder containing the
                                  function database.  [required]
  -o, --output PATH               The output file  [default: /mnt/c/Users/bhil
                                  l/code/SHOGUN/results-170828]
  -l, --level [genus|species|strain]
                                  The level to collapse to.
  -h, --help                      Show this message and exit.
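The plurality vote described above can be sketched as: each lower-level member contributes its annotation, and the higher-level taxon takes the most common one. The KO labels below are made up:

```python
from collections import Counter

# Assign a higher-level taxon the most common (plurality) KEGG ID
# among its lower-level members.
def plurality_vote(member_annotations):
    return Counter(member_annotations).most_common(1)[0][0]

# Hypothetical strains of one species and their KO annotations:
# two strains carry K00001, one carries K00002.
species_ko = plurality_vote(["K00001", "K00001", "K00002"])
```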

normalize

Usage: shogun normalize [OPTIONS]

  Normalize a taxonomic profile by median depth.

Options:
  -i, --input PATH   The output taxatable.  [required]
  -o, --output PATH  The taxatable output normalized by median depth.
                     [default: /mnt/c/Users/bhill/code/SHOGUN/taxatable.normal
                     ized-170828.txt]
  -h, --help         Show this message and exit.
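Median-depth normalization can be sketched as: compute each sample's total count, then rescale every sample so its total equals the median of those totals. This is a minimal sketch with made-up counts, not SHOGUN's implementation:

```python
import statistics

# Scale each sample's counts so its total equals the median depth
# across all samples.
def normalize_to_median_depth(table):
    depths = {s: sum(taxa.values()) for s, taxa in table.items()}
    median = statistics.median(depths.values())
    return {
        s: {t: c * median / depths[s] for t, c in taxa.items()}
        for s, taxa in table.items()
    }

# Hypothetical samples: S1 has depth 100, S2 has depth 300,
# so the median depth is 200.
table = {"S1": {"taxA": 60, "taxB": 40}, "S2": {"taxA": 150, "taxB": 150}}
norm = normalize_to_median_depth(table)
```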

pipeline

Usage: shogun pipeline [OPTIONS]

  Run the SHOGUN pipeline, including taxonomic and functional profiling.

Options:
  -a, --aligner [all|bowtie2|burst|utree]
                                  The aligner to use [Note: default burst is
                                  capitalist, use burst-tax if you want to
                                  redistribute].  [default: burst]
  -i, --input PATH                The file containing the combined seqs.
                                  [required]
  -d, --database PATH             The path to the database folder.
  -o, --output PATH               The output folder directory  [default: /mnt/
                                  c/Users/bhill/code/SHOGUN/results-170828]
  -l, --level [kingdom|phylum|class|order|family|genus|species|strain|all|off]
                                  The level to collapse taxatables and
                                  functions to (not required, can specify
                                  off).
  --function / --no-function      Run functional algorithms. **This will
                                  normalize the taxatable by median depth.
  --capitalist / --no-capitalist  Run capitalist with burst post-align or not.
  -t, --threads INTEGER           Number of threads to use.
  -h, --help                      Show this message and exit.

redistribute

This command redistributes the reads at a given taxonomic level. It assumes you have a BIOM txt file output from SHOGUN align, or a summarized table from redistribute at a lower level.

Usage: shogun redistribute [OPTIONS]

  Run the SHOGUN redistribution algorithm on a taxonomic profile.

Options:
  -i, --input PATH                The taxatable.  [required]
  -d, --database PATH             The path to the database folder.  [required]
  -l, --level [kingdom|phylum|class|order|family|genus|species|strain|all]
                                  The level to collapse to.
  -o, --output PATH               The output file  [default: /mnt/c/Users/bhil
                                  l/code/SHOGUN/taxatable-170828.txt]
  -h, --help                      Show this message and exit.
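The up-summarization half of redistribution can be sketched as: truncate each semicolon-delimited taxonomy string at the target rank and sum counts that now share a label. (SHOGUN's redistribute also pushes counts down the tree; this sketch covers only the upward direction, and the taxonomy strings are made up.)

```python
# Sum counts of taxonomy strings after truncating them to a fixed
# number of semicolon-delimited ranks (e.g. 2 = phylum).
def collapse_to_rank(counts, n_ranks):
    collapsed = {}
    for taxon, count in counts.items():
        key = ";".join(taxon.split(";")[:n_ranks])
        collapsed[key] = collapsed.get(key, 0) + count
    return collapsed

counts = {
    "k__Bacteria;p__Firmicutes;c__Bacilli": 10,
    "k__Bacteria;p__Firmicutes": 5,
    "k__Bacteria;p__Bacteroidetes": 7,
}
by_phylum = collapse_to_rank(counts, 2)
```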

summarize_functional

This command takes a KEGG table and outputs a summarized KEGG pathway and module table.

Usage: shogun summarize_functional [OPTIONS]

  Run the SHOGUN functional algorithm on a taxonomic profile.

Options:
  -i, --input PATH     The taxatable.  [required]
  -d, --database PATH  The path to the folder containing the database.
                       [required]
  -o, --output PATH    The output file  [default:
                       /home/grad00/hillm096/results-171106]
  -h, --help           Show this message and exit.
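Conceptually, this summarization is a rollup through a KO-to-module mapping: each KO's counts are added to every module it belongs to. The mapping and counts below are made up; the real mapping comes from the functional database:

```python
# Roll KEGG orthology (KO) counts up into module counts via a
# KO -> modules mapping; a KO may belong to several modules.
def summarize(ko_counts, ko_to_modules):
    module_counts = {}
    for ko, count in ko_counts.items():
        for module in ko_to_modules.get(ko, []):
            module_counts[module] = module_counts.get(module, 0) + count
    return module_counts

ko_counts = {"K00001": 4, "K00002": 6}
mapping = {"K00001": ["M00001"], "K00002": ["M00001", "M00002"]}
modules = summarize(ko_counts, mapping)
```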

Database creation

To create a BURST database for SHOGUN, follow the instructions on the BURST GitHub page to create .acx and .edx files with the same base filename, then create a file called "metadata.yaml" in the same folder, with an entry burst: <basename>, as in this example: https://github.com/knights-lab/SHOGUN/blob/master/shogun/tests/data/metadata.yaml

You will need a taxonomy file formatted as in the genomes.small.tax file here to provide taxonomy. Add an entry to the YAML file with a key general: and a sub-key taxonomy: <taxonomy file name>. A Bowtie2 database base filename and a UTree database filename may be added as follows:

general:
  taxonomy: genomes.small.tax
  fasta: genomes.small.fna
  shear: sheared_bayes.fixed.txt
function: function/ko
burst: burst/genomes.small
bowtie2: bowtie2/genomes.small
utree: utree/genomes.small
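A quick sanity check that the paths named in metadata.yaml actually exist can be sketched as below. The parsing is hand-rolled for the simple two-level layout shown above (avoiding a YAML dependency), and the demo database folder is hypothetical:

```python
import os
import tempfile

# Collect the file-path values from the simple two-level
# metadata.yaml layout and report which are missing on disk.
def missing_db_files(yaml_text, db_folder):
    missing = []
    for line in yaml_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.endswith(":"):
            continue  # skip blanks and section headers like "general:"
        _, value = stripped.split(":", 1)
        path = os.path.join(db_folder, value.strip())
        # BURST/Bowtie2/UTree entries are base filenames, so accept
        # any file that starts with the base name.
        folder, base = os.path.split(path)
        if not os.path.isdir(folder) or not any(
            name.startswith(base) for name in os.listdir(folder)
        ):
            missing.append(value.strip())
    return missing

# Hypothetical database folder with only the taxonomy file present,
# so the BURST entry is reported missing.
db = tempfile.mkdtemp()
open(os.path.join(db, "genomes.small.tax"), "w").close()
meta = "general:\n  taxonomy: genomes.small.tax\nburst: burst/genomes.small\n"
missing = missing_db_files(meta, db)
```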

A functional database is optional. Examples are shown here.

All database files for BURST, Bowtie2, and UTree should be in the same parent folder. Once the folder is created and the metadata.yaml file is populated as in the above example, the new database may be used in SHOGUN as follows:

shogun pipeline -i input.fna -d /path/to/database/parent/folder/ -o output -a burst
shogun pipeline -i input.fna -d /path/to/database/parent/folder/ -o output -a utree
shogun pipeline -i input.fna -d /path/to/database/parent/folder/ -o output -a bowtie2

Pre-built database files can be downloaded by running the following command:

wget -i https://raw.githubusercontent.com/knights-lab/SHOGUN/master/docs/shogun_db_links.txt

shogun's People

Contributors

bhillmann, danknights, gregcaporaso, nbokulich, qiyunzhu


shogun's Issues

Additional functional databases

Is it possible to include data from e.g. MetaCyc in place of KEGG? Is it simply a matter of providing user-formatted mapping files?

Error Running UTREE with SHOGUN-BUGBASE

I'm trying to run shogun_bugbase with the command

shogun_bugbase -i /panfs/roc/scratch/staleyc/byron/ssg/ -o /panfs/roc/scratch/staleyc/byron/ssg/shogun/ -u /home/sadowsky/staleyc/shogun_bugbase_db

Everything seems to have installed properly and is extracted. I think all of the paths are linked in the .bashrc. I have R ver 3.3.1 loaded and all of the packages (except grid) installed. I get the following error

Traceback (most recent call last):
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/bin/shogun_bugbase", line 11, in
load_entry_point('ninja-shogun==0.0.1.dev0', 'console_scripts', 'shogun_bugbase')()
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/lib/python3.5/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/lib/python3.5/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/lib/python3.5/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/lib/python3.5/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/home/sadowsky/staleyc/miniconda3/envs/shogun/lib/python3.5/site-packages/ninja_shogun/scripts/shogun_bugbase.py", line 51, in shogun_bugbase
with open(utree_tsv) as inf:
FileNotFoundError: [Errno 2] No such file or directory: '/panfs/roc/scratch/staleyc/byron/ssg/017.baseline.S12.utree.tsv'

I tried installing UTREE again using the compilation instructions with the shogun environment up and also copied the UTREE scripts into /miniconda3/envs/shogun/bin.

The only files in the input directory are .fna files and the log file from shi7en.

Any thoughts on how to fix this?

Access to prebuilt databases

hello!

Thank you for developing SHOGUN. I want to run the tool with the pre-built databases, but it looks like the URLs in the shogun_db_links.txt file are outdated. Is there somewhere else I can download the databases from?

This is the message I am getting:

$ wget -i shogun_db_links.txt
--2024-04-23 12:05:44--  http://metagenome.cs.umn.edu/public/shogun-db/metadata.yaml
Resolving metagenome.cs.umn.edu (metagenome.cs.umn.edu)... 128.101.96.204
Connecting to metagenome.cs.umn.edu (metagenome.cs.umn.edu)|128.101.96.204|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/metadata.yaml [following]
--2024-04-23 12:05:44--  https://knightslab.orgpublic/shogun-db/metadata.yaml
Resolving knightslab.orgpublic (knightslab.orgpublic)... failed: Name or service not known.
wget: unable to resolve host address 'knightslab.orgpublic'
--2024-04-23 12:05:59--  http://metagenome.cs.umn.edu/public/shogun-db/rep82.fna
Connecting to metagenome.cs.umn.edu (metagenome.cs.umn.edu)|128.101.96.204|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/rep82.fna [following]
--2024-04-23 12:06:00--  https://knightslab.orgpublic/shogun-db/rep82.fna
Resolving knightslab.orgpublic (knightslab.orgpublic)... failed: Name or service not known.
wget: unable to resolve host address 'knightslab.orgpublic'
--2024-04-23 12:06:10--  http://metagenome.cs.umn.edu/public/shogun-db/rep82.tax
Reusing existing connection to metagenome.cs.umn.edu:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/rep82.tax [following]
--2024-04-23 12:06:10--  https://knightslab.orgpublic/shogun-db/rep82.tax
Resolving knightslab.orgpublic (knightslab.orgpublic)... failed: Name or service not known.
wget: unable to resolve host address 'knightslab.orgpublic'
--2024-04-23 12:06:20--  http://metagenome.cs.umn.edu/public/shogun-db/sheared_bayes.txt
Reusing existing connection to metagenome.cs.umn.edu:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/sheared_bayes.txt [following]
--2024-04-23 12:06:20--  https://knightslab.orgpublic/shogun-db/sheared_bayes.txt
Resolving knightslab.orgpublic (knightslab.orgpublic)... failed: Name or service not known.
wget: unable to resolve host address 'knightslab.orgpublic'
--2024-04-23 12:06:30--  http://metagenome.cs.umn.edu/public/shogun-db/utree/rep82.gg.ctr
Reusing existing connection to metagenome.cs.umn.edu:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/utree/rep82.gg.ctr [following]
--2024-04-23 12:06:30--  https://knightslab.orgpublic/shogun-db/utree/rep82.gg.ctr
Resolving knightslab.orgpublic (knightslab.orgpublic)... failed: Name or service not known.
wget: unable to resolve host address 'knightslab.orgpublic'
--2024-04-23 12:06:40--  http://metagenome.cs.umn.edu/public/shogun-db/utree/rep82.gg.log
Reusing existing connection to metagenome.cs.umn.edu:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://knightslab.orgpublic/shogun-db/utree/rep82.gg.log [following]
--2024-04-23 12:06:40--  https://knightslab.orgpublic/shogun-db/utree/rep82.gg.log
Resolving knightslab.orgpublic (knightslab.orgpublic)... 

Thanks in advance!

Test failed after installing with conda

Any suggestion?

$ python -m unittest discover shogun
/home/zhou/soft/SHOGUN/shogun/__main__.py:109: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if level is not 'off':
...E./home/zhou/soft/SHOGUN/shogun/aligners/burst_aligner.py:79: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  df = pd.DataFrame(samples_lca_map, dtype=np.int).fillna(0).astype(np.int)
.E.../home/zhou/soft/SHOGUN/shogun/function/_function.py:99: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  kegg_table = np.zeros((num_samples, num_kegg_ids), dtype=np.int)
12/14/2021 07:30:10 PM : WARNING : Overlap of taxa and function 0.38
/home/zhou/soft/SHOGUN/shogun/function/_function.py:118: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  columns=sorted(column_names, key=column_names.get), dtype=np.int).T
/home/zhou/soft/SHOGUN/shogun/function/_function.py:99: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  kegg_table = np.zeros((num_samples, num_kegg_ids), dtype=np.int)
/home/zhou/soft/SHOGUN/shogun/function/_function.py:118: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  columns=sorted(column_names, key=column_names.get), dtype=np.int).T
/home/zhou/soft/SHOGUN/shogun/function/_function.py:99: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  kegg_table = np.zeros((num_samples, num_kegg_ids), dtype=np.int)
/home/zhou/soft/SHOGUN/shogun/function/_function.py:118: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  columns=sorted(column_names, key=column_names.get), dtype=np.int).T
.....12/14/2021 07:30:20 PM : DEBUG : Initiate Logger bowtie2
12/14/2021 07:30:20 PM : DEBUG : bowtie2 --no-unal -x /home/zhou/soft/SHOGUN/shogun/tests/data/bowtie2/genomes.small -S /tmp/shogun-temp-dir-_v06qqbg/alignment.bowtie2.sam --np 1 --mp "1,1" --rdg "0,1" --rfg "0,1" --score-min "L,0,-0.02" -f /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna --very-sensitive -k 16 -p 96 --reorder --no-hd
12/14/2021 07:30:21 PM : DEBUG : 300 reads; of these:
12/14/2021 07:30:21 PM : DEBUG :   300 (100.00%) were unpaired; of these:
12/14/2021 07:30:21 PM : DEBUG :     112 (37.33%) aligned 0 times
12/14/2021 07:30:21 PM : DEBUG :     161 (53.67%) aligned exactly 1 time
12/14/2021 07:30:21 PM : DEBUG :     27 (9.00%) aligned >1 times
12/14/2021 07:30:21 PM : DEBUG : 62.67% overall alignment rate
12/14/2021 07:30:21 PM : DEBUG : 1.17 seconds
12/14/2021 07:30:21 PM : DEBUG : Subprocess finished.
12/14/2021 07:30:21 PM : DEBUG : Beginning post align with aligner bowtie2
12/14/2021 07:30:21 PM : DEBUG : strain
12/14/2021 07:30:21 PM : DEBUG : Beginning redistribution for file: /tmp/shogun-temp-dir-_v06qqbg/taxatable.bowtie2.txt
12/14/2021 07:30:21 PM : DEBUG : Attempting to load the database metadata file at /home/zhou/soft/SHOGUN/shogun/tests/data/metadata.yaml
.12/14/2021 07:30:22 PM : DEBUG : Initiate Logger burst
12/14/2021 07:30:22 PM : DEBUG : burst15 --queries /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna --references /home/zhou/soft/SHOGUN/shogun/tests/data/burst/genomes.small.edx --output /tmp/shogun-temp-dir-dcftb54i/alignment.burst.b6 --threads 96 --mode CAPITALIST --id 0.98 --npenalize --skipambig --forwardreverse --taxonomy /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.tax --taxacut 5
F12/14/2021 07:30:22 PM : DEBUG : Initiate Logger utree
12/14/2021 07:30:22 PM : DEBUG : utree-search_gg /home/zhou/soft/SHOGUN/shogun/tests/data/utree/genomes.small.ctr /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna /tmp/shogun-temp-dir-9vcarvdg/alignment.utree.tsv 96 RC
F.........................12/14/2021 07:30:22 PM : DEBUG : bowtie2 --no-unal -x /home/zhou/soft/SHOGUN/shogun/tests/data/bowtie2/genomes.small -S /tmp/shogun-test-temp-qre7e33a/sims.sam --np 1 --mp "1,1" --rdg "0,1" --rfg "0,1" --score-min "L,0,-0.02" -f /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna --very-sensitive -k 16 -p 1 --reorder --no-hd
12/14/2021 07:30:22 PM : DEBUG : 300 reads; of these:
12/14/2021 07:30:22 PM : DEBUG :   300 (100.00%) were unpaired; of these:
12/14/2021 07:30:22 PM : DEBUG :     112 (37.33%) aligned 0 times
12/14/2021 07:30:22 PM : DEBUG :     161 (53.67%) aligned exactly 1 time
12/14/2021 07:30:22 PM : DEBUG :     27 (9.00%) aligned >1 times
12/14/2021 07:30:22 PM : DEBUG : 62.67% overall alignment rate
12/14/2021 07:30:22 PM : DEBUG : 0.23 seconds
12/14/2021 07:30:22 PM : DEBUG : Subprocess finished.
.12/14/2021 07:30:22 PM : DEBUG : bowtie2-build -f /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.fna /tmp/shogun-test-temp-fr3jjj80/genomes.small
12/14/2021 07:30:22 PM : DEBUG : Settings:
12/14/2021 07:30:22 PM : DEBUG :   Output files: "/tmp/shogun-test-temp-fr3jjj80/genomes.small.*.bt2"
12/14/2021 07:30:22 PM : DEBUG :   Line rate: 6 (line is 64 bytes)
12/14/2021 07:30:22 PM : DEBUG :   Lines per side: 1 (side is 64 bytes)
12/14/2021 07:30:22 PM : DEBUG :   Offset rate: 4 (one in 16)
12/14/2021 07:30:22 PM : DEBUG :   FTable chars: 10
12/14/2021 07:30:22 PM : DEBUG :   Strings: unpacked
12/14/2021 07:30:22 PM : DEBUG :   Max bucket size: default
12/14/2021 07:30:22 PM : DEBUG :   Max bucket size, sqrt multiplier: default
12/14/2021 07:30:22 PM : DEBUG :   Max bucket size, len divisor: 4
12/14/2021 07:30:22 PM : DEBUG :   Difference-cover sample period: 1024
12/14/2021 07:30:22 PM : DEBUG :   Endianness: little
12/14/2021 07:30:22 PM : DEBUG :   Actual local endianness: little
12/14/2021 07:30:22 PM : DEBUG :   Sanity checking: disabled
12/14/2021 07:30:22 PM : DEBUG :   Assertions: disabled
12/14/2021 07:30:22 PM : DEBUG :   Random seed: 0
12/14/2021 07:30:22 PM : DEBUG :   Sizeofs: void*:8, int:4, long:8, size_t:8
12/14/2021 07:30:22 PM : DEBUG : Input files DNA, FASTA:
12/14/2021 07:30:22 PM : DEBUG :   /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.fna
12/14/2021 07:30:22 PM : DEBUG : Building a SMALL index
12/14/2021 07:30:22 PM : DEBUG : Reading reference sizes
12/14/2021 07:30:22 PM : DEBUG :   Time reading reference sizes: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Calculating joined length
12/14/2021 07:30:22 PM : DEBUG : Writing header
12/14/2021 07:30:22 PM : DEBUG : Reserving space for joined string
12/14/2021 07:30:22 PM : DEBUG : Joining reference sequences
12/14/2021 07:30:22 PM : DEBUG :   Time to join reference sequences: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : bmax according to bmaxDivN setting: 195017
12/14/2021 07:30:22 PM : DEBUG : Using parameters --bmax 146263 --dcv 1024
12/14/2021 07:30:22 PM : DEBUG :   Doing ahead-of-time memory usage test
12/14/2021 07:30:22 PM : DEBUG :   Passed!  Constructing with these parameters: --bmax 146263 --dcv 1024
12/14/2021 07:30:22 PM : DEBUG : Constructing suffix-array element generator
12/14/2021 07:30:22 PM : DEBUG : Building DifferenceCoverSample
12/14/2021 07:30:22 PM : DEBUG :   Building sPrime
12/14/2021 07:30:22 PM : DEBUG :   Building sPrimeOrder
12/14/2021 07:30:22 PM : DEBUG :   V-Sorting samples
12/14/2021 07:30:22 PM : DEBUG :   V-Sorting samples time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG :   Allocating rank array
12/14/2021 07:30:22 PM : DEBUG :   Ranking v-sort output
12/14/2021 07:30:22 PM : DEBUG :   Ranking v-sort output time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG :   Invoking Larsson-Sadakane on ranks
12/14/2021 07:30:22 PM : DEBUG :   Invoking Larsson-Sadakane on ranks time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG :   Sanity-checking and returning
12/14/2021 07:30:22 PM : DEBUG : Building samples
12/14/2021 07:30:22 PM : DEBUG : Reserving space for 12 sample suffixes
12/14/2021 07:30:22 PM : DEBUG : Generating random suffixes
12/14/2021 07:30:22 PM : DEBUG : QSorting 12 sample offsets, eliminating duplicates
12/14/2021 07:30:22 PM : DEBUG : QSorting sample offsets, eliminating duplicates time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Multikey QSorting 12 samples
12/14/2021 07:30:22 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:22 PM : DEBUG :   Multikey QSorting samples time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Calculating bucket sizes
12/14/2021 07:30:22 PM : DEBUG : Splitting and merging
12/14/2021 07:30:22 PM : DEBUG :   Splitting and merging time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Avg bucket size: 97507.8 (target: 146262)
12/14/2021 07:30:22 PM : DEBUG : Converting suffix-array elements to index image
12/14/2021 07:30:22 PM : DEBUG : Allocating ftab, absorbFtab
12/14/2021 07:30:22 PM : DEBUG : Entering Ebwt loop
12/14/2021 07:30:22 PM : DEBUG : Getting block 1 of 8
12/14/2021 07:30:22 PM : DEBUG :   Reserving size (146263) for bucket 1
12/14/2021 07:30:22 PM : DEBUG :   Calculating Z arrays for bucket 1
12/14/2021 07:30:22 PM : DEBUG :   Entering block accumulator loop for bucket 1:
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 10%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 20%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 30%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 40%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 50%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 60%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 70%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 80%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 90%
12/14/2021 07:30:22 PM : DEBUG :   bucket 1: 100%
12/14/2021 07:30:22 PM : DEBUG :   Sorting block of length 105140 for bucket 1
12/14/2021 07:30:22 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:22 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Returning block of 105141 for bucket 1
12/14/2021 07:30:22 PM : DEBUG : Getting block 2 of 8
12/14/2021 07:30:22 PM : DEBUG :   Reserving size (146263) for bucket 2
12/14/2021 07:30:22 PM : DEBUG :   Calculating Z arrays for bucket 2
12/14/2021 07:30:22 PM : DEBUG :   Entering block accumulator loop for bucket 2:
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 10%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 20%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 30%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 40%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 50%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 60%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 70%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 80%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 90%
12/14/2021 07:30:22 PM : DEBUG :   bucket 2: 100%
12/14/2021 07:30:22 PM : DEBUG :   Sorting block of length 117607 for bucket 2
12/14/2021 07:30:22 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:22 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:22 PM : DEBUG : Returning block of 117608 for bucket 2
12/14/2021 07:30:23 PM : DEBUG : Getting block 3 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 3:
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 56295 for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 56296 for bucket 3
12/14/2021 07:30:23 PM : DEBUG : Getting block 4 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 4:
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 101433 for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 101434 for bucket 4
12/14/2021 07:30:23 PM : DEBUG : Getting block 5 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 5:
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 125010 for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 125011 for bucket 5
12/14/2021 07:30:23 PM : DEBUG : Getting block 6 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 6:
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 119654 for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 119655 for bucket 6
12/14/2021 07:30:23 PM : DEBUG : Getting block 7 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 7:
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 122986 for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 122987 for bucket 7
12/14/2021 07:30:23 PM : DEBUG : Getting block 8 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 8:
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 31937 for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 31938 for bucket 8
12/14/2021 07:30:23 PM : DEBUG : Exited Ebwt loop
12/14/2021 07:30:23 PM : DEBUG : fchr[A]: 0
12/14/2021 07:30:23 PM : DEBUG : fchr[C]: 226540
12/14/2021 07:30:23 PM : DEBUG : fchr[G]: 388187
12/14/2021 07:30:23 PM : DEBUG : fchr[T]: 554183
12/14/2021 07:30:23 PM : DEBUG : fchr[$]: 780069
12/14/2021 07:30:23 PM : DEBUG : Exiting Ebwt::buildToDisk()
12/14/2021 07:30:23 PM : DEBUG : Returning from initFromVector
12/14/2021 07:30:23 PM : DEBUG : Wrote 4454705 bytes to primary EBWT file: /tmp/shogun-test-temp-fr3jjj80/genomes.small.1.bt2
12/14/2021 07:30:23 PM : DEBUG : Wrote 195024 bytes to secondary EBWT file: /tmp/shogun-test-temp-fr3jjj80/genomes.small.2.bt2
12/14/2021 07:30:23 PM : DEBUG : Re-opening _in1 and _in2 as input streams
12/14/2021 07:30:23 PM : DEBUG : Returning from Ebwt constructor
12/14/2021 07:30:23 PM : DEBUG : Headers:
12/14/2021 07:30:23 PM : DEBUG :     len: 780069
12/14/2021 07:30:23 PM : DEBUG :     bwtLen: 780070
12/14/2021 07:30:23 PM : DEBUG :     sz: 195018
12/14/2021 07:30:23 PM : DEBUG :     bwtSz: 195018
12/14/2021 07:30:23 PM : DEBUG :     lineRate: 6
12/14/2021 07:30:23 PM : DEBUG :     offRate: 4
12/14/2021 07:30:23 PM : DEBUG :     offMask: 0xfffffff0
12/14/2021 07:30:23 PM : DEBUG :     ftabChars: 10
12/14/2021 07:30:23 PM : DEBUG :     eftabLen: 20
12/14/2021 07:30:23 PM : DEBUG :     eftabSz: 80
12/14/2021 07:30:23 PM : DEBUG :     ftabLen: 1048577
12/14/2021 07:30:23 PM : DEBUG :     ftabSz: 4194308
12/14/2021 07:30:23 PM : DEBUG :     offsLen: 48755
12/14/2021 07:30:23 PM : DEBUG :     offsSz: 195020
12/14/2021 07:30:23 PM : DEBUG :     lineSz: 64
12/14/2021 07:30:23 PM : DEBUG :     sideSz: 64
12/14/2021 07:30:23 PM : DEBUG :     sideBwtSz: 48
12/14/2021 07:30:23 PM : DEBUG :     sideBwtLen: 192
12/14/2021 07:30:23 PM : DEBUG :     numSides: 4063
12/14/2021 07:30:23 PM : DEBUG :     numLines: 4063
12/14/2021 07:30:23 PM : DEBUG :     ebwtTotLen: 260032
12/14/2021 07:30:23 PM : DEBUG :     ebwtTotSz: 260032
12/14/2021 07:30:23 PM : DEBUG :     color: 0
12/14/2021 07:30:23 PM : DEBUG :     reverse: 0
12/14/2021 07:30:23 PM : DEBUG : Total time for call to driver() for forward index: 00:00:01
12/14/2021 07:30:23 PM : DEBUG : Reading reference sizes
12/14/2021 07:30:23 PM : DEBUG :   Time reading reference sizes: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Calculating joined length
12/14/2021 07:30:23 PM : DEBUG : Writing header
12/14/2021 07:30:23 PM : DEBUG : Reserving space for joined string
12/14/2021 07:30:23 PM : DEBUG : Joining reference sequences
12/14/2021 07:30:23 PM : DEBUG :   Time to join reference sequences: 00:00:00
12/14/2021 07:30:23 PM : DEBUG :   Time to reverse reference sequence: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : bmax according to bmaxDivN setting: 195017
12/14/2021 07:30:23 PM : DEBUG : Using parameters --bmax 146263 --dcv 1024
12/14/2021 07:30:23 PM : DEBUG :   Doing ahead-of-time memory usage test
12/14/2021 07:30:23 PM : DEBUG :   Passed!  Constructing with these parameters: --bmax 146263 --dcv 1024
12/14/2021 07:30:23 PM : DEBUG : Constructing suffix-array element generator
12/14/2021 07:30:23 PM : DEBUG : Building DifferenceCoverSample
12/14/2021 07:30:23 PM : DEBUG :   Building sPrime
12/14/2021 07:30:23 PM : DEBUG :   Building sPrimeOrder
12/14/2021 07:30:23 PM : DEBUG :   V-Sorting samples
12/14/2021 07:30:23 PM : DEBUG :   V-Sorting samples time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG :   Allocating rank array
12/14/2021 07:30:23 PM : DEBUG :   Ranking v-sort output
12/14/2021 07:30:23 PM : DEBUG :   Ranking v-sort output time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG :   Invoking Larsson-Sadakane on ranks
12/14/2021 07:30:23 PM : DEBUG :   Invoking Larsson-Sadakane on ranks time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG :   Sanity-checking and returning
12/14/2021 07:30:23 PM : DEBUG : Building samples
12/14/2021 07:30:23 PM : DEBUG : Reserving space for 12 sample suffixes
12/14/2021 07:30:23 PM : DEBUG : Generating random suffixes
12/14/2021 07:30:23 PM : DEBUG : QSorting 12 sample offsets, eliminating duplicates
12/14/2021 07:30:23 PM : DEBUG : QSorting sample offsets, eliminating duplicates time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Multikey QSorting 12 samples
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Multikey QSorting samples time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Calculating bucket sizes
12/14/2021 07:30:23 PM : DEBUG : Splitting and merging
12/14/2021 07:30:23 PM : DEBUG :   Splitting and merging time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Split 1, merged 6; iterating...
12/14/2021 07:30:23 PM : DEBUG : Splitting and merging
12/14/2021 07:30:23 PM : DEBUG :   Splitting and merging time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Avg bucket size: 97507.8 (target: 146262)
12/14/2021 07:30:23 PM : DEBUG : Converting suffix-array elements to index image
12/14/2021 07:30:23 PM : DEBUG : Allocating ftab, absorbFtab
12/14/2021 07:30:23 PM : DEBUG : Entering Ebwt loop
12/14/2021 07:30:23 PM : DEBUG : Getting block 1 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 1
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 1
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 1:
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 1: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 54079 for bucket 1
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 54080 for bucket 1
12/14/2021 07:30:23 PM : DEBUG : Getting block 2 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 2
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 2
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 2:
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 2: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 139565 for bucket 2
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 139566 for bucket 2
12/14/2021 07:30:23 PM : DEBUG : Getting block 3 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 3:
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 3: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 36894 for bucket 3
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 36895 for bucket 3
12/14/2021 07:30:23 PM : DEBUG : Getting block 4 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 4:
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 4: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 134423 for bucket 4
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 134424 for bucket 4
12/14/2021 07:30:23 PM : DEBUG : Getting block 5 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 5:
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 5: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 119960 for bucket 5
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 119961 for bucket 5
12/14/2021 07:30:23 PM : DEBUG : Getting block 6 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 6:
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 6: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 131326 for bucket 6
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 131327 for bucket 6
12/14/2021 07:30:23 PM : DEBUG : Getting block 7 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 7:
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 7: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 111161 for bucket 7
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:23 PM : DEBUG :   Sorting block time: 00:00:00
12/14/2021 07:30:23 PM : DEBUG : Returning block of 111162 for bucket 7
12/14/2021 07:30:23 PM : DEBUG : Getting block 8 of 8
12/14/2021 07:30:23 PM : DEBUG :   Reserving size (146263) for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   Calculating Z arrays for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   Entering block accumulator loop for bucket 8:
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 10%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 20%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 30%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 40%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 50%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 60%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 70%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 80%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 90%
12/14/2021 07:30:23 PM : DEBUG :   bucket 8: 100%
12/14/2021 07:30:23 PM : DEBUG :   Sorting block of length 52654 for bucket 8
12/14/2021 07:30:23 PM : DEBUG :   (Using difference cover)
12/14/2021 07:30:24 PM : DEBUG :   Sorting block time: 00:00:01
12/14/2021 07:30:24 PM : DEBUG : Returning block of 52655 for bucket 8
12/14/2021 07:30:24 PM : DEBUG : Exited Ebwt loop
12/14/2021 07:30:24 PM : DEBUG : fchr[A]: 0
12/14/2021 07:30:24 PM : DEBUG : fchr[C]: 226540
12/14/2021 07:30:24 PM : DEBUG : fchr[G]: 388187
12/14/2021 07:30:24 PM : DEBUG : fchr[T]: 554183
12/14/2021 07:30:24 PM : DEBUG : fchr[$]: 780069
12/14/2021 07:30:24 PM : DEBUG : Exiting Ebwt::buildToDisk()
12/14/2021 07:30:24 PM : DEBUG : Returning from initFromVector
12/14/2021 07:30:24 PM : DEBUG : Wrote 4454705 bytes to primary EBWT file: /tmp/shogun-test-temp-fr3jjj80/genomes.small.rev.1.bt2
12/14/2021 07:30:24 PM : DEBUG : Wrote 195024 bytes to secondary EBWT file: /tmp/shogun-test-temp-fr3jjj80/genomes.small.rev.2.bt2
12/14/2021 07:30:24 PM : DEBUG : Re-opening _in1 and _in2 as input streams
12/14/2021 07:30:24 PM : DEBUG : Returning from Ebwt constructor
12/14/2021 07:30:24 PM : DEBUG : Headers:
12/14/2021 07:30:24 PM : DEBUG :     len: 780069
12/14/2021 07:30:24 PM : DEBUG :     bwtLen: 780070
12/14/2021 07:30:24 PM : DEBUG :     sz: 195018
12/14/2021 07:30:24 PM : DEBUG :     bwtSz: 195018
12/14/2021 07:30:24 PM : DEBUG :     lineRate: 6
12/14/2021 07:30:24 PM : DEBUG :     offRate: 4
12/14/2021 07:30:24 PM : DEBUG :     offMask: 0xfffffff0
12/14/2021 07:30:24 PM : DEBUG :     ftabChars: 10
12/14/2021 07:30:24 PM : DEBUG :     eftabLen: 20
12/14/2021 07:30:24 PM : DEBUG :     eftabSz: 80
12/14/2021 07:30:24 PM : DEBUG :     ftabLen: 1048577
12/14/2021 07:30:24 PM : DEBUG :     ftabSz: 4194308
12/14/2021 07:30:24 PM : DEBUG :     offsLen: 48755
12/14/2021 07:30:24 PM : DEBUG :     offsSz: 195020
12/14/2021 07:30:24 PM : DEBUG :     lineSz: 64
12/14/2021 07:30:24 PM : DEBUG :     sideSz: 64
12/14/2021 07:30:24 PM : DEBUG :     sideBwtSz: 48
12/14/2021 07:30:24 PM : DEBUG :     sideBwtLen: 192
12/14/2021 07:30:24 PM : DEBUG :     numSides: 4063
12/14/2021 07:30:24 PM : DEBUG :     numLines: 4063
12/14/2021 07:30:24 PM : DEBUG :     ebwtTotLen: 260032
12/14/2021 07:30:24 PM : DEBUG :     ebwtTotSz: 260032
12/14/2021 07:30:24 PM : DEBUG :     color: 0
12/14/2021 07:30:24 PM : DEBUG :     reverse: 1
12/14/2021 07:30:24 PM : DEBUG : Total time for backward call to driver() for mirror index: 00:00:01
12/14/2021 07:30:24 PM : DEBUG : 1.49 seconds
12/14/2021 07:30:24 PM : DEBUG : Subprocess finished.
..
12/14/2021 07:30:24 PM : DEBUG : burst15 --queries /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna --references /home/zhou/soft/SHOGUN/shogun/tests/data/burst/genomes.small.edx --output /tmp/shogun-test-temp-7qad189c/sims.b6 --threads 1 --mode CAPITALIST --id 0.98 --npenalize --skipambig --forwardreverse --taxonomy /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.tax --taxacut 5
E
12/14/2021 07:30:24 PM : DEBUG : burst15 --references /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.fna --output /tmp/shogun-test-temp-oqpdpg0u/genomes.small.edb --npenalize --makedb --fingerprint --shear 500 --clustradius 1050
EF
12/14/2021 07:30:24 PM : DEBUG : utree-search_gg /home/zhou/soft/SHOGUN/shogun/tests/data/utree/genomes.small.gg.ctr /home/zhou/soft/SHOGUN/shogun/tests/data/combined_seqs.fna /tmp/shogun-test-temp-z5xw593k/utree_gg-test-sims.txt 1 RC
E
12/14/2021 07:30:24 PM : DEBUG : utree-build_gg /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.fna /home/zhou/soft/SHOGUN/shogun/tests/data/genomes.small.tax /tmp/shogun-test-temp-9yim61nd/genomes.small.gg.utr 1 2
EF
======================================================================
ERROR: test_burst_align (aligners.tests.test_aligner.TestAligner)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/aligners/tests/test_aligner.py", line 33, in test_burst_align
    self.assertTrue(aligner.align(infile, outdir)[0] == 0)
  File "/home/zhou/soft/SHOGUN/shogun/aligners/burst_aligner.py", line 55, in align
    proc, out, err = burst_align(infile, self.outfile,
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/burst_wrapper.py", line 86, in burst_align
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'burst15'

======================================================================
ERROR: test_utree_align (aligners.tests.test_aligner.TestAligner)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/aligners/tests/test_aligner.py", line 55, in test_utree_align
    self.assertTrue(aligner.align(infile, outdir)[0] == 0)
  File "/home/zhou/soft/SHOGUN/shogun/aligners/utree_aligner.py", line 36, in align
    proc, out, err = utree_search_gg(self.compressed_tree, infile, outfile, threads=self.threads, shell=self.shell)
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/utree_wrapper.py", line 66, in utree_search_gg
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'utree-search_gg'

======================================================================
ERROR: test_burst_align (wrappers.tests.test_burst.TestBurst)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_burst.py", line 33, in test_burst_align
    self.assertTrue(burst_align(infile, outfile, database, tax=tax)[0] == 0)
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/burst_wrapper.py", line 86, in burst_align
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'burst15'

======================================================================
ERROR: test_burst_build (wrappers.tests.test_burst.TestBurst)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_burst.py", line 39, in test_burst_build
    print(burst_build(fasta, outfile, shell=False, clustradius=1050, shear=500))
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/burst_wrapper.py", line 108, in burst_build
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'burst15'

======================================================================
ERROR: test_utree_align_gg (wrappers.tests.test_utree.TestUtree)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_utree.py", line 49, in test_utree_align_gg
    self.assertTrue(utree_search_gg(database, infile, outfile)[0] == 0)
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/utree_wrapper.py", line 66, in utree_search_gg
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'utree-search_gg'

======================================================================
ERROR: test_utree_build_gg (wrappers.tests.test_utree.TestUtree)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_utree.py", line 40, in test_utree_build_gg
    utree_build_gg(fasta, tax, outfile_uncompressed, shell=False)
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/utree_wrapper.py", line 32, in utree_build_gg
    return run_command(cmd, shell=shell)
  File "/home/zhou/soft/SHOGUN/shogun/utils/_utils.py", line 54, in run_command
    with subprocess.Popen(
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/zhou/miniconda2/envs/shogun/lib/python3.9/subprocess.py", line 1821, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'utree-build_gg'

======================================================================
FAIL: test_burst_pipeline (tests.test_pipeline.TestAligner)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/tests/test_pipeline.py", line 86, in test_burst_pipeline
    self.assertTrue(len(outfile_ra) == 1)
AssertionError: False is not true

======================================================================
FAIL: test_utree_pipeline (tests.test_pipeline.TestAligner)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/tests/test_pipeline.py", line 38, in test_utree_pipeline
    self.assertTrue(len(outfile_ra) == 1)
AssertionError: False is not true

======================================================================
FAIL: test_burst_path (wrappers.tests.test_burst.TestBurst)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_burst.py", line 26, in test_burst_path
    self.assertTrue(shutil.which("burst15") is not None)
AssertionError: False is not true

======================================================================
FAIL: test_utree_path (wrappers.tests.test_utree.TestUtree)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/zhou/soft/SHOGUN/shogun/wrappers/tests/test_utree.py", line 31, in test_utree_path
    self.assertTrue(shutil.which("utree-build_gg") is not None)
AssertionError: False is not true

----------------------------------------------------------------------
Ran 52 tests in 33.336s

FAILED (failures=4, errors=6)

The conda environment:

$ conda list
# packages in environment at /home/zhou/miniconda2/envs/shogun:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       1_gnu    conda-forge
bowtie2                   2.4.4            py39hbb4e92a_0    bioconda
ca-certificates           2021.10.26           h06a4308_2    defaults
click                     8.0.3            py39hf3d152e_1    conda-forge
cytoolz                   0.11.2           py39h3811e60_1    conda-forge
ld_impl_linux-64          2.36.1               hea4e1c9_2    conda-forge
libblas                   3.9.0           12_linux64_openblas    conda-forge
libcblas                  3.9.0           12_linux64_openblas    conda-forge
libffi                    3.4.2                h7f98852_5    conda-forge
libgcc-ng                 11.2.0              h1d223b6_11    conda-forge
libgfortran-ng            11.2.0              h69a702a_11    conda-forge
libgfortran5              11.2.0              h5c6108e_11    conda-forge
libgomp                   11.2.0              h1d223b6_11    conda-forge
liblapack                 3.9.0           12_linux64_openblas    conda-forge
libopenblas               0.3.18          pthreads_h8fe5266_0    conda-forge
libstdcxx-ng              11.2.0              he4da1e4_11    conda-forge
libzlib                   1.2.11            h36c2ea0_1013    conda-forge
lz4-c                     1.9.3                h9c3ff4c_1    conda-forge
ncurses                   6.2                  h58526e2_4    conda-forge
numpy                     1.21.4           py39hdbf815f_0    conda-forge
openssl                   3.0.0                h7f98852_2    conda-forge
pandas                    1.3.4            py39hde0f152_1    conda-forge
perl                      5.32.1          1_h7f98852_perl5    conda-forge
pip                       21.3.1             pyhd8ed1ab_0    conda-forge
python                    3.9.7           hf930737_3_cpython    conda-forge
python-dateutil           2.8.2              pyhd8ed1ab_0    conda-forge
python_abi                3.9                      2_cp39    conda-forge
pytz                      2021.3             pyhd8ed1ab_0    conda-forge
pyyaml                    6.0              py39h3811e60_3    conda-forge
readline                  8.1                  h46c0cb4_0    conda-forge
scipy                     1.7.3            py39hee8e79c_0    conda-forge
setuptools                59.4.0           py39hf3d152e_0    conda-forge
shogun                    1.0.8                    pypi_0    pypi
six                       1.16.0             pyh6c4a22f_0    conda-forge
sqlite                    3.37.0               h9cd32fc_0    conda-forge
tbb                       2020.3               hfd86e86_0    defaults
tk                        8.6.11               h27826a3_1    conda-forge
toolz                     0.11.2             pyhd8ed1ab_0    conda-forge
tzdata                    2021e                he74cb21_0    conda-forge
wheel                     0.37.0             pyhd8ed1ab_1    conda-forge
xz                        5.2.5                h516909a_1    conda-forge
yaml                      0.2.5                h516909a_0    conda-forge
zlib                      1.2.11            h36c2ea0_1013    conda-forge
zstd                      1.4.9                ha95c52a_0    conda-forge

Testing conda installation

The README instructions for testing the installation say:

change directory into the root folder of the repository

But the installation instructions don't indicate any repository directory. Perhaps add something about where to find this directory for a conda installation?

SHOGUN creates empty files

Hi all,

I've successfully installed SHOGUN (tests run with no apparent errors), but each step of the pipeline results in empty files, whether they're run individually or in pipeline mode.

SHOGUN fails silently when bowtie2 runs out of memory

Working through a dataset, I found that most of the resulting alignments only included 100K-200K sequence identifiers from the input dataset even though most of my samples have >1M sequences. Unsure of what was going on, I tried running bowtie2 manually (according to the command call here). That's when I noticed my OS was killing bowtie2 with signal 9:

bowtie2 --no-unal -x /[redacted]/shogun-db/bt2/rep82 -S [redacted].sam --np 1 --mp "1,1" --rdg "0,1" --rfg "0,1" --score-min '"L,0,-0.02"' -f [redacted].fna --very-sensitive -k 16 -p 16 --reorder --no-hd
(ERR): bowtie2-align died with signal 9 (KILL)

After this happened, I checked the exit code (using echo $?) and saw exit code 1. As best as I can tell, there's nowhere in the SHOGUN code that checks bowtie2's exit code, even though it is being returned here:

proc, out, err = bowtie2_align(infile, outfile, self.prefix,
    num_threads=self.threads, alignments_to_report=alignments_to_report,
    shell=self.shell, percent_id=self.percent_id)
if self.post_align:
    df = self._post_align(outfile)
    self.outfile = os.path.join(outdir, 'taxatable.bowtie2.txt')
    df.to_csv(self.outfile, sep='\t', float_format="%d", na_rep=0, index_label="#OTU ID")
return proc, out, err

There are no checks for it in the align method calls:

aligner_cl.align(input, output)

aligner_cl.align(input, output)

The worst thing about this error is that, since SHOGUN won't fail or catch it, you can "successfully" process a dataset and generate incomplete contingency tables. The resulting SAM file is written to disk but is obviously incomplete; unfortunately, shogun assign_taxonomy doesn't know this, so it just processes the dataset as expected.


In my case, running on a 32 GB system, my samples were missing around 60-80% of their reads.
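The fix the report points at is small: inspect the subprocess's return code before trusting its output. A hedged Python sketch (illustrative only; the `run_command` name mirrors SHOGUN's helper but this is not the project's actual code):

```python
import subprocess

def run_command(cmd, shell=False):
    """Run a command and fail loudly on a non-zero exit code.

    Illustrative sketch of the suggested check, not SHOGUN's actual helper.
    """
    proc = subprocess.Popen(
        cmd,
        shell=shell,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    out, err = proc.communicate()
    if proc.returncode != 0:
        # A bowtie2 killed by signal 9 surfaces here as a non-zero code,
        # instead of silently yielding a truncated SAM file downstream.
        raise RuntimeError(
            "command %r failed with exit code %d: %s" % (cmd, proc.returncode, err)
        )
    return proc, out, err
```

With a check like this, the pipeline would abort at the align step rather than handing a truncated SAM file to assign_taxonomy.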

Additional documents request

Dear Knights-lab,

I'm recently interested in analyzing shallow shotgun metagenomics sequencing, and I came across SHOGUN and thought it's interesting to try.

However, I'm having trouble finding any actual vignettes or pages showing the full flow of the SHOGUN pipeline, from sequencer FASTQ files through to some preliminary analysis and plots.

I'm relatively new to shotgun sequencing, and I get that SHOGUN is a one-line command, but it would be very helpful to see the code itself and the product. For example, the setup of the required directories is rather arbitrary to novices like me, as is which files are needed, etc.

In short, I have a directory of R1 and R2 fastq files that I'm hoping to analyze, but I struggle to know where to begin. If there's any information you can point me to, that would be greatly appreciated!

Thanks

Mac installation

Hi,

Not sure if this is a real issue as I'm very new to this.
I have encountered the following error while trying to install for Mac.

$ conda create -n shogun -c knights-lab shogun

Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - shogun

Current channels:

  - https://conda.anaconda.org/knights-lab/osx-64
  - https://conda.anaconda.org/knights-lab/noarch
  - https://repo.anaconda.com/pkgs/main/osx-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/osx-64
  - https://repo.anaconda.com/pkgs/r/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

Would you kindly help me on how to sort this out.

Kind regards,
Shatha

Contaminate filtering takes forever

BURST's any mode allows fast filtering of contaminate reads. It takes in query sequences and returns any valid alignment in the database above the given percent identity. This is much faster than resolving the possibly many hits in the contaminate database.

  • Update to use the newest version of BURST in the conda dependencies

  • Add the BURST any command as a wrapper and use it for contaminate filtering

Issues with -a option, shogun pipeline command

Hi, I would like to analyze my shallow shotgun metagenomics data according to the OGU method: https://journals.asm.org/doi/10.1128/msystems.00167-22
The recommended aligner is SHOGUN, especially, I assume, when using shallow data.
However, I have issues running it:
shogun align
-i combined_seqs.fna
-a bowtie2
-d /srv/beegfs/scratch/users/p/parkr/Classifiers/WoL_Globus_full/databases/shogun
-o shogun_wol_align

I get error:
Traceback (most recent call last):
File "/var/spool/slurmd/job13597238/slurm_script", line 8, in <module>
sys.exit(cli())
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/users/p/parkr/.local/lib/python3.8/site-packages/shogun/main.py", line 78, in align
aligner_cl.align(input, output)
File "/home/users/p/parkr/.local/lib/python3.8/site-packages/shogun/aligners/bowtie2_aligner.py", line 32, in align
proc, out, err = bowtie2_align(infile, outfile, self.prefix,
File "/home/users/p/parkr/.local/lib/python3.8/site-packages/shogun/wrappers/bowtie2_wrapper.py", line 39, in bowtie2_align
return run_command(cmd, shell=shell)
File "/home/users/p/parkr/.local/lib/python3.8/site-packages/shogun/utils/_utils.py", line 54, in run_command
with subprocess.Popen(
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/subprocess.py", line 858, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/ebsofts/QIIME2/2021.8/lib/python3.8/subprocess.py", line 1704, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'bowtie2'

If I don't provide -a (so burst is used), then I get the error:
FileNotFoundError: [Errno 2] No such file or directory: 'burst15'

When I give it the bowtie2 directory from the WoL database, I get the error:
Error: Invalid value for '-a' / '--aligner': invalid choice: /srv/beegfs/scratch/users/p/parkr/Classifiers/WoL_Globus_full/databases/bowtie2. (choose from all, bowtie2, burst, utree)

In the WoL_Globus_full folder there are these folders/files:
(screenshot of the folder listing omitted)

When I copied the content of bowtie2 folder to the shogun folder, it didn't work either.
The pre-built WoL databases were downloaded from Globus ( https://biocore.github.io/wol/download )

Could you guide me how to make it work?

Thank you in advance!!

Fix to automatically build pickle files

On a new project pull, the software should lazy-load the pickle files needed for computation. If they can't be found at the expected location, build them when they are needed.
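The proposed behavior can be sketched as follows (a minimal illustration; the class and method names, and the hardcoded path, are placeholders, not SHOGUN's actual `pickle_class` API):

```python
import os
import pickle

class PickleBacked:
    """Lazy-load a pickled object, building and caching it when missing.

    Sketch only: `build` stands in for whatever expensive construction
    the real class performs.
    """
    pickle_path = "data/pickle/IMGMap.pkl"

    @classmethod
    def build(cls):
        # Placeholder for the expensive construction step.
        return cls()

    @classmethod
    def load(cls):
        if os.path.exists(cls.pickle_path):
            with open(cls.pickle_path, "rb") as handle:
                return pickle.load(handle)
        # Instead of raising FileNotFoundError, build and cache the object.
        obj = cls.build()
        os.makedirs(os.path.dirname(cls.pickle_path), exist_ok=True)
        with open(cls.pickle_path, "wb") as handle:
            pickle.dump(obj, handle)
        return obj
```

The first `load()` pays the build cost and writes the cache; subsequent loads read the pickle directly, so the traceback below would never be reached.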

An example bug log:
python /project/flatiron/dan/shogun/src/bin/kegg_parse_img_ids.py -i . -o kegg.csv
Traceback (most recent call last):
File "/project/flatiron/dan/shogun/src/bin/kegg_parse_img_ids.py", line 77, in <module>
main()
File "/project/flatiron/dan/shogun/src/bin/kegg_parse_img_ids.py", line 61, in main
img_map = IMGMap.load()
File "/project/flatiron/dan/shogun/src/lib/shogun/shogun/utilities/pickle_class.py", line 25, in load
raise error
File "/project/flatiron/dan/shogun/src/lib/shogun/shogun/utilities/pickle_class.py", line 21, in load
with open(self_dump, 'rb') as handle:
FileNotFoundError: [Errno 2] No such file or directory: '/project/flatiron/dan/shogun/data/pickle/IMGMap.pkl'

combined seqs

Hi,

just a simple question to clarify something while reading through your tool,

The --input data in the pipeline, and even in the align subcommand, is described as "combined seqs".
Does this essentially mean just concatenating the paired-end reads? i.e., it does not mean actually merging the read pairs, as "merging" is generally understood in paired-end sequencing?

Many thanks

shogun functional fails to run with no output error

Running shogun functional produces a single warning, "WARNING : Overlap of taxa and function 0.36", with no additional errors, and fails to produce any functional output. Any idea what may be going on?

CUDA incompatibility error

When trying to install I get the following error (Running on Ubuntu 18.04):

UnsatisfiableError: The following specifications were found to be incompatible with your CUDA driver:

  - feature:/linux-64::__cuda==10.2=0

Your installed CUDA driver is: 10.2

My conda environment:

(shogun) idoerg@margarita:~$ conda info

     active environment : shogun
    active env location : /home/idoerg/anaconda3/envs/shogun
            shell level : 2
       user config file : /home/idoerg/.condarc
   populated config files :
          conda version : 4.8.3
    conda-build version : 3.18.11
         python version : 3.8.3.final.0
       virtual packages : __cuda=10.2
                          __glibc=2.27
       base environment : /home/idoerg/anaconda3  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /home/idoerg/anaconda3/pkgs
                          /home/idoerg/.conda/pkgs
       envs directories : /home/idoerg/anaconda3/envs
                          /home/idoerg/.conda/envs
               platform : linux-64
             user-agent : conda/4.8.3 requests/2.24.0 CPython/3.8.3 Linux/4.19.2-041902-generic ubuntu/18.04.5 glibc/2.27
                UID:GID : 1000:1000
             netrc file : None
           offline mode : False

last_common_ancestor.py import module find_betweens error

This error occurs when I followed the README.md to run with the example dataset. The full screen output is:

(shogun) -bash-4.1$ shogun_bt2_lca -i ./mock_communities -b ./annotated/bt2/test.hmp_species
Traceback (most recent call last):
  File "/home/me/Programs/Miniconda2/envs/shogun/bin/shogun_bt2_lca", line 11, in <module>
    load_entry_point('shogun==0.0.1.dev0', 'console_scripts', 'shogun_bt2_lca')()
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/pkg_resources/__init__.py", line 561, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2631, in load_entry_point
    return ep.load()
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2291, in load
    return self.resolve()
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2297, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/shogun/scripts/shogun_bt2_lca.py", line 7, in <module>
    from shogun.utils.last_common_ancestor import build_lca_map
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/shogun/utils/__init__.py", line 1, in <module>
    from .last_common_ancestor import build_lca_map
  File "/home/me/Programs/Miniconda2/envs/shogun/lib/python3.6/site-packages/shogun/utils/last_common_ancestor.py", line 1, in <module>
    from find_betweens import verify_make_dir
ModuleNotFoundError: No module named 'find_betweens'

The computer system is:

(shogun) -bash-4.1$ uname -a
Linux some.addr.edu 2.6.32-504.23.4.el6.x86_64 #1 SMP Tue Jun 9 20:57:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Thank you!

Shogun function by using kraken & Bracken output

Hi:)

Hello, I am very interested in the tool SHOGUN and want to use it for functional annotation.
However, I ran into some problems because my taxonomy was made with Kraken2 and Bracken2.
1) Can my functional annotation be done in SHOGUN with a species taxonomy table from Kraken2 and Bracken2?
2) I found that SHOGUN has two functional-annotation commands, functional and summarize_functional, and both predict functions from taxonomy. What are the specific differences between them?
3) I tried the taxonomy table with shogun functional, but it reports that not all species are recorded; only some of my species overlap with SHOGUN/ko-species2ko.txt. Keeping only the overlapping ones and running the command produces some output:

kneaddata_custom_species_mpa_bacteria_modified.kegg.modules.coverage.txt
kneaddata_custom_species_mpa_bacteria_modified.kegg.modules.txt
kneaddata_custom_species_mpa_bacteria_modified.kegg.pathways.coverage.txt
kneaddata_custom_species_mpa_bacteria_modified.kegg.pathways.txt
kneaddata_custom_species_mpa_bacteria_modified.kegg.txt
kneaddata_custom_species_mpa_bacteria_modified.normalized.txt

However, when I tried shogun summarize_functional, the outputs were all empty.

Could you please give me some suggestions on these questions? Thank you very much for your help and I look forward to hearing from you.

Best Regards,
Lu

SHOGUN silently produces empty output alignment when BURST segfaults

Hey guys,

We've been trying to track down a problem while adapting SHOGUN to Qiita, the symptom of which was finding this message when running integration tests in Travis:

+   File "/home/travis/build/qiita-spots/qp-shotgun/miniconda3/envs/qp-shotgun/lib/python3.5/site-packages/pandas/core/groupby.py", line 2934, in _get_grouper
+     raise KeyError(gpr)
+ KeyError: 'summary'

@antgonza also was having the same error on his OS X install, but neither I (on Barnacle) nor @semarpetrus (on his Linux box) were encountering it.

Running SHOGUN directly using the following commands yielded a good alignment + downstream files on Barnacle:

aln_out=foo.align
database=/home/jgsanders/git_sw/qp-shotgun/qp_shotgun/shogun/databases/shogun
level=species
aligner=burst
threads=8
profile=profile.tsv
aln_out_fp=foo.align/alignment.burst.b6
redistributed="profile.${level}.tsv"
fun_output=functional

shogun align \
--aligner ${aligner} \
--threads ${threads} \
--database ${database} \
--input combined.fna \
--output ${aln_out}

shogun assign_taxonomy \
--aligner ${aligner} \
--database ${database} \
--input ${aln_out_fp} \
--output ${profile}

shogun redistribute \
--database ${database} \
--level ${level} \
--input ${profile} \
--output ${redistributed}

fun_level=$level
shogun functional \
--database ${database} \
--input ${profile} \
--output ${fun_output} \
--level ${fun_level}

where the test database is here and the input data are here

Running the same align command on an OS X box (using Gabe's supplied burst15 binary) ran for a bit and then produced an empty .b6 output file.

Running BURST directly on the OS X box produced the following output:

burst15 --references qp_shotgun/shogun/databases/shogun/burst/5min.edx --queries combined.fna  --output test.b6 --accelerator qp_shotgun/shogun/databases/shogun/burst/5min.acx
This is BURST [v0.99.7LL]
 --> Using accelerator file qp_shotgun/shogun/databases/shogun/burst/5min.acx
Using up to AVX-128 with 8 threads.
 --> [Accel] Accelerator found. Parsing...
 --> [Accel] Total accelerants: 805949 [bytes = 2106932]
 --> [Accel] Reading 0 ambiguous entries

EDB database provided. Parsing...
 --> EDB: Fingerprints are DISABLED
 --> EDB: Parsing compressed headers
 --> EDB: Sheared database (shear size = 515)
 --> EDB: 970 refs [970 orig], 61 clumps, 1030 maxR
Parsed 400000 queries (0.071752). Calculating minMax...
Found min 150, max 150 (0.000109).
Converting queries... Converted (0.007549)
Copying queries... Copied (0.002561)
Sorting queries... Sorted (0.088294)
Copying indices... Copied (0.001531)
Determining uniqueness... Done (0.007544). Number unique: 397338
Collecting unique sequences... Done (0.001721)
Creating data structures... Done (0.004528) [maxED: 4]
Determining query ambiguity... Determined (0.023589)
Creating bins... Created (0.011927); Unambig: 391663, ambig: 5675, super-ambig: 0 [5675,397338,397338]
Re-sorting... Re-sorted (0.194431)
Calculating divergence... Calculated (0.009815) [10.120026 avg div; 150 max]
Fingerprints not enabled
Setting QBUNCH to 16
Using ACCELERATOR to align 397338 unique queries...
Search Progress: [100.00%]
Search complete. Consolidating results...
Segmentation fault: 11

What do you think?

shogun filter - incompatible DB error

Hi,
I'm using shogun filter to decontaminate reads that map to the human genome, and I'm running into an incompatible-DB error. I found a workaround, which is to remove reads longer than 254 bases; after that, shogun filter finishes correctly and removes some reads associated with the human genome. However, I lose quite a lot of reads when keeping only those shorter than 254 bases. Is there any other way to avoid the DB incompatibility issue? The shogun filter option uses BURST and two databases, humanD252.acx and humanD252.edx.
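The length-filtering workaround described above can be sketched in Python (illustrative only; assumes simple 2-line FASTA records and the 254-base threshold from the report):

```python
def filter_fasta_by_length(in_path, out_path, max_len=253):
    """Write only reads shorter than 254 bases, the workaround described above.

    Sketch only; assumes each FASTA record is a header line plus one
    sequence line.
    """
    with open(in_path) as src, open(out_path, "w") as dst:
        header = None
        for line in src:
            if line.startswith(">"):
                header = line
            else:
                if header is not None and len(line.strip()) <= max_len:
                    dst.write(header)
                    dst.write(line)
                header = None
```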
Best,
Mariusz

Add the ability to summarize KEGG tables

Currently, the functional summarization occurs in one step. Some people would like to summarize a KEGG table to the modules, rather than predicting the modules and KEGG table directly from the taxatable.

  • Add a new function to summarize a KEGG table

  • Add the ability to summarize to KEGG pathways

Running SHOGUN according to documentation fails with large datasets

Hi all,

I'm trying to run SHOGUN on ~340 shallow shotgun samples, and running it as described in the documentation doesn't seem to be working. As far as I can tell, BURST instantly segfaults when I give it the combined_seqs.fna (667GB) that I get from shi7. Is there any reason why SHOGUN can't be run on the individual samples, with the resulting tables joined at the end?
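The per-sample approach suggested above is workable in principle: run SHOGUN on each sample and outer-join the per-sample taxatables afterward. A hedged pandas sketch (assumes tab-delimited tables indexed by a "#OTU ID" column, as shogun assign_taxonomy writes; the function name is illustrative):

```python
import glob
import pandas as pd

def join_taxatables(pattern):
    """Outer-join per-sample SHOGUN taxatables into one counts table.

    Sketch only: taxa missing from a sample become zero counts.
    """
    tables = [
        pd.read_csv(path, sep="\t", index_col="#OTU ID")
        for path in sorted(glob.glob(pattern))
    ]
    return pd.concat(tables, axis=1).fillna(0).astype(int)
```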

Thanks,

Stephen

Kegg rare removal cut-off

Hi,

I've used the shogun (burst) pipeline to process a batch of shallow shotgun data. Would you recommend removing rare kegg pathways/modules before conducting statistical analyses? If so, is there a standard cut-off point that should be used?
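There is no universal cut-off; one common heuristic is to drop features present in fewer than roughly 10% of samples. A hedged pandas sketch (the threshold and the orientation, features as rows and samples as columns, are assumptions, not a SHOGUN recommendation):

```python
import pandas as pd

def drop_rare_features(table, min_prevalence=0.1):
    """Keep rows (e.g. KEGG pathways/modules) present in at least
    min_prevalence of samples. Purely illustrative default."""
    prevalence = (table > 0).mean(axis=1)
    return table.loc[prevalence >= min_prevalence]
```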

Thanks,

Z
