
ucscGenomeBrowser / kent


UCSC Genome Browser source tree. Stable branch: "beta".

Home Page: http://genome.ucsc.edu/

License: Other

Shell 26.07% Awk 0.04% Makefile 1.03% Python 2.73% HTML 23.19% C 41.73% C++ 0.08% AppleScript 0.01% Perl 3.89% CSS 0.32% PostScript 0.21% Pascal 0.01% RenderScript 0.01% Yacc 0.01% Lua 0.02% M4 0.01% Roff 0.04% Gherkin 0.01% Pep8 0.01% AngelScript 0.63%
bioinformatics computational-biology data-visualisation genomics-visualization

kent's People

Contributors

angiehinrichs, annzweig, beagan-svg, been, braneyboo, brittneydwick, carpevida, chinhli, christopherlee1, cjvillar, claymfischer, connercpowell, diekhans, galt, gerardoperez1, hartera, jimkent, jnavarr5, jonathancasper, katerose, kehayden, lnassar, maryjgoldman, matthewspeir, maximilianh, mmaddren, nullmodel, patriciaplchan, ucscbrianlee, vsmalladi



kent's Issues

Install kent

Hi, I am trying to install Kent for use on a server where I don't have root access.

The binaries I am trying to use are faSplit, faToTwoBit, twoBitInfo, et al.

Whether I download Kent as a whole or just the binaries, I encounter the same error as below:

./faSplit: /lib64/libz.so.1: version `ZLIB_1.2.3.3' not found (required by ./faSplit)
./faSplit: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by ./faSplit)
./faSplit: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./faSplit)

Thanks, and please let me know if there are any suggestions.

fixCr compile failure

Hi,
I am compiling some userApps with a locally installed OpenSSL.
Everything goes OK until the application make step.

export MACHTYPE=x86_64
cd kent/src/lib

make CFLAGS="-I/home/O_O/local/include/uuid"

cd kent/src/jkOwnLib
make CFLAGS="-I/home/O_O/local/include/uuid"
cd kent/src/htslib
make CFLAGS="-I/home/O_O/local/include/uuid"
cd kent/src/utils/fixCr
make CFLAGS="-I/home/O_O/local/include/uuid"
 
gcc -O -g -o /home/O_O/bin/x86_64/fixCr fixCr.o    ../../lib/x86_64/jkweb.a -L/home/O_O/local/lib -lmysqlclient  -lstdc++ -lrt -L/home/O_O/local/lib -lpthread -lssl -lcrypto ../../htslib/libhts.a -L/home/O_O/local/lib -lpng16 -lm -lz
../../lib/x86_64/jkweb.a(https.o): In function `openssl_pthread_setup':
/home/O_O/kent/src/lib/https.c:38: undefined reference to `CRYPTO_num_locks'
/home/O_O/kent/src/lib/https.c:42: undefined reference to `CRYPTO_set_id_callback'
/home/O_O/kent/src/lib/https.c:43: undefined reference to `CRYPTO_set_locking_callback'
../../lib/x86_64/jkweb.a(https.o): In function `openSslInit':
/home/O_O/kent/src/lib/https.c:75: undefined reference to `SSL_library_init'
/home/O_O/kent/src/lib/https.c:76: undefined reference to `ERR_load_crypto_strings'
/home/O_O/kent/src/lib/https.c:78: undefined reference to `OPENSSL_add_all_algorithms_noconf'
../../lib/x86_64/jkweb.a(https.o): In function `netConnectHttpsThread':
/home/O_O/kent/src/lib/https.c:109: undefined reference to `SSLv23_client_method'
collect2: error: ld returned 1 exit status
make: *** [/home/O_O/bin/x86_64/fixCr] Error 1

How can I fix it?
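For context, a hedged guess at the cause rather than a kent patch: CRYPTO_num_locks, SSL_library_init, SSLv23_client_method, and friends were removed (or reduced to no-op macros) in OpenSSL 1.1.0, so undefined references like those above typically mean the code was compiled against pre-1.1 headers while linking a 1.1+ libssl/libcrypto (here, the one in /home/O_O/local/lib). A version guard of roughly this shape builds against either library:

#include <openssl/opensslv.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/evp.h>

static void openSslInitCompat(void)
/* Sketch: explicit initialization is only needed before OpenSSL 1.1.0. */
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
SSL_library_init();
ERR_load_crypto_strings();
OpenSSL_add_all_algorithms();
#endif
/* OpenSSL >= 1.1.0 initializes itself on first use. */
}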

path issue for doBlastzChainNet.pl

Sorry for posting about what might be a relatively easy path issue that I have not yet been able to fix. I have been running through the example here, http://genomewiki.ucsc.edu/index.php/DoBlastzChainNet.pl#PATH_setup, to make sure everything is working before using my genomes of interest.

All the steps preceding the actual script run work. Note that I am running this on my university HPCC, so all of my scripts, binaries, and genomes are in a local directory, not the root directory.

When I run
doBlastzChainNet.pl DEF -verbose=10 -noDbNameCheck -workhorse=localhost -bigClusterHub=localhost -skipDownload -dbHost=localhost -smallClusterHub=localhost -trackHub -fileServer=localhost -syntenicNet
I get:

DEF looks OK!
	tDb=dm6
	qDb=GCF_000005575.2_AgamP3
	s1d=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/dm6.2bit
	isSelf=
bash: hgsql: command not found
bash: hgsql: command not found
HgStepManager: executing from step 'partition' through step 'syntenicNet'.
HgStepManager: executing step 'partition' Tue Nov 23 17:24:20 2021.
# chmod a+x /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/run.blastz/doPartition.bash
# ssh -x -o 'StrictHostKeyChecking = no' -o 'BatchMode = yes' localhost nice /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/run.blastz/doPartition.bash
+ cd /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/run.blastz
+ /gpfs/scratch/withomas/project_noRoot_MGA/data/scripts/partitionSequence.pl 32100000 10000 /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/dm6.2bit /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/dm6.chrom.sizes -xdir xdir.sh -rawDir ../psl 18 -lstDir tParts
lstDir tParts must be empty, but seems to have files  (part062.lst ...)
Command failed:
ssh -x -o 'StrictHostKeyChecking = no' -o 'BatchMode = yes' localhost nice /gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/run.blastz/doPartition.bash

So my first question would be: why is my hgsql command not found?
I have the bin and scripts directories exported in PATH in my .bashrc:
export PATH=/usr/bin:/usr/sbin:/gpfs/scratch/withomas/project_noRoot_MGA/data/bin:/gpfs/scratch/withomas/project_noRoot_MGA/data/scripts:$PATH
and I am able to use it outside of the script:
which hgsql
/gpfs/scratch/withomas/project_noRoot_MGA/data/bin/hgsql
and the path is set in my DEF file

# dm6 vs GCF_000005575.2_AgamP3
PATH=/gpfs/scratch/withomas/project_noRoot_MGA/data/scripts:/gpfs/scratch/withomas/project_noRoot_MGA/data/bin
BLASTZ=/gpfs/scratch/withomas/project_noRoot_MGA/data/bin/lastz-1.04.00
BLASTZ_H=2000
BLASTZ_Y=3400
BLASTZ_L=4000
BLASTZ_K=2200
BLASTZ_Q=/gpfs/scratch/withomas/project_noRoot_MGA/data/lastz/HoxD55.q

# TARGET: D. melanogaster dm6
SEQ1_DIR=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/dm6.2bit
SEQ1_LEN=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/dm6.chrom.sizes
SEQ1_CHUNK=32100000
SEQ1_LAP=10000
SEQ1_LIMIT=18

# QUERY: GCF_000005575.2_AgamP3
SEQ2_DIR=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/GCF_000005575.2_AgamP3.2bit
SEQ2_LEN=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3/GCF_000005575.2_AgamP3.chrom.sizes
SEQ2_CHUNK=1000000
SEQ2_LIMIT=2000
SEQ2_LAP=0

BASE=/gpfs/scratch/withomas/project_noRoot_MGA/data/genomes/dm6/trackData/GCF_000005575.2_AgamP3
TMPDIR=/gpfs/scratch/withomas/project_noRoot_MGA/dev/shm

but still no luck.

I've attempted to play around with some of the functions in doBlastzChainNet.pl, such as loadDef and requirePath, but still haven't been able to figure it out. Any help would be greatly appreciated!

Note: perhaps it isn't even a path issue, if HgStepManager seems to be working fine?

twoBitToFa: Can only handle version 0 of this file. This is version 1

Hi,
Due to the large size of my genome, I had to create the 2bit file with faToTwoBit -long. However, twoBitToFa crashed and showed: Can only handle version 0 of this file. This is version 1.
I have replaced the old version (2020) with the latest, but it didn't work.

Do you know the reason?

Best regards

Bug : URL redirection does not seem to work properly in UDC

Hello,

rtracklayer relies on the kent library, and it seems UDC (URL Data Cache) does not handle URL redirection properly, which is causing the lawremi/rtracklayer#42 issue.

I tested the udcFileMayOpen function with "http://bedbase.org/api/bed/78c0e4753d04b238fc07e4ebe5a02984/file/bigbedfile".
For ease of testing, I inserted the following code segment in the src/utils/bedToBigBed utility (in the usage function).

void usage()
/* Explain usage and exit. */
{
struct udcFile *udcTestFile =
udcFileMayOpen("http://bedbase.org/api/bed/78c0e4753d04b238fc07e4ebe5a02984/file/bigbedfile", udcDefaultDir());
if (udcTestFile == NULL)
    printf("Not Working\n");
else
    printf("Working\n");
...

After compilation, I received "Not Working" as output. The function udcInfoViaHttp seems related to the URL redirection. I would be happy to help.

Thanks!

Support for chromosomes longer than 2^31

Dear UCSC genome browser team,

I ran into a problem trying to create a 2bit file for a large genome we are currently trying to publish.
There are four scaffolds that are longer than 2^31 bp.

scaf01 4922309470 9 50 51
scaf02 4899412387 5020755678 50 51
scaf03 4208277625 10018156322 50 51
scaf04 2887904217 14310599509 50 51

Observed behavior:

faToTwoBit -long -noMask scaffolds_v2.fa scaf.2bit
expandFaFastBuf: integer overflow when trying to increase buffer size from 2147483648 to a min of 51.

Is it easy to set the variable newBufSize to unsigned long instead of unsigned int in order to deal with such long sequences?

static void expandFaFastBuf(int bufPos, int minExp)
Or does it break the front-end, too? It seems that needHugeMem(size_t size) at
void *needHugeMem(size_t size)
can already deal with large amounts of memory so my guess is that it should be a relatively easy fix, unless the front-end cannot deal with such large values.
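To make the suggestion concrete, here is a hedged sketch of the widening the reporter proposes; the names faFastBuf and faFastBufSize are assumed to mirror statics in faToTwoBit's source, and freeing the old buffer is elided:

#include <stddef.h>
#include <string.h>

extern void *needHugeMem(size_t size);   /* kent allocator, already takes size_t */

static char *faFastBuf;
static size_t faFastBufSize;             /* was unsigned int: wrapped at 2^31 */

static void expandFaFastBuf(size_t bufPos, size_t minExp)
/* Sketch: grow the buffer by doubling in a 64-bit type so sizes past 2^31
 * cannot overflow. */
{
size_t newBufSize = faFastBufSize ? faFastBufSize : 64 * 1024;
while (newBufSize < bufPos + minExp)
    newBufSize *= 2;                     /* no 32-bit wraparound in size_t */
char *newBuf = needHugeMem(newBufSize);
if (faFastBuf != NULL)
    memcpy(newBuf, faFastBuf, faFastBufSize);
faFastBuf = newBuf;
faFastBufSize = newBufSize;
}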

Thank you very much!
Sergej

Issue with fixCr

Hi there,
When I try to make fixCr, fixCr.o is made but not gotCr.c.
this is the output:
/bin/sh: mysql_config: command not found
/bin/sh: mysql_config: command not found
gcc -O -g -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64 -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../../htslib -I/include -I/usr/include/libpng16 -o fixCr.o -c fixCr.c
gcc -O -g -o /home/ashling_charles/bin/x86_64/fixCr fixCr.o ../../lib/x86_64/jkweb.a "-L/usr/lib64 -lmysqlclient" -lstdc++ -lrt -lpthread -lssl -lcrypto ../../htslib/libhts.a -L/usr/lib64 -lpng16 -lm -lz
true /home/ashling_charles/bin/x86_64/fixCr

I know that without this, hal won't work.
Thanks in advance.

browserSetup.sh generates invalid passwords

Hello,

During an installation the following happened:

Processing triggers for ureadahead (0.100.0-19) ...
|
| The Mysql server was installed and therefore has an empty root password.
| Trying to set mysql root password to the randomly generated string "--m179X8"
mysqladmin: [ERROR] unknown option '--m179X8'
| Could not connect to mysql to set the root password to --m179X8.
| A root password must have been set by a previous installation.
| Please reset the root password to an empty password by following these
| instructions: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html
| Then restart the script.
| Or, if you remember the old root password, write it to a file /root/.my.cnf,
| create three lines
| [client]
| user=root
| password=PASSWORD
| run chmod 600 /root/.my.cnf and restart this script.
ubuntu@ip-10-4-72-51:~$ mysql -u root
ERROR 1698 (28000): Access denied for user 'root'@'localhost'

Couldn't make directory ../trash/udcCache/

We have a LIMS plugin that redirects to the UCSC genome browser track view, and each time we trigger it, we get the error message below (path is truncated). This breaks the visualization.

Couldn't make directory ../trash/udcCache/https/storage.googleapis.com/broad-epi-tracks/track_026251.bwQ3FGoogleAccessIdQ3Dbroad-epiQ2540appspot.gserviceaccount.com

netClass query

Hello,
I have a query about the netClass tool.

netClass
Add classification info to net.
usage:
   netClass [options] in.net tDb qDb out.net
       tDb - database to fetch target repeat masker table information
       qDb - database to fetch query repeat masker table information

It's not clear what the format of the database files is. Do you have any further information on how to create the files (tDb and qDb) required to use this tool?

Thanks!
Laura

rudpSend timed out & profile db not found in sqlProfileToMyCnf()

First of all, thanks for making all these scripts to generate chain files. I tried to use these scripts to generate a chain file from a customized genome assembly to hg19.

Now I have run into some issues while following this tutorial: http://genomewiki.ucsc.edu/index.php/DoSameSpeciesLiftOver.pl
at the step of running DoSameSpeciesLiftOver.pl. (I used localhost instead of other host names.)

Here is the error log:

**profile db not found in sqlProfileToMyCnf() -- failed for file /home/yangyxt/.hgsql.cnf-cYDq6z failed with errno 2**
HgStepManager: executing from step 'align' through step 'cleanup'.
HgStepManager: executing step 'align' Tue Oct  6 16:13:05 2020.
Using localhost, /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/final_contigs.2bit and /paedwy/disk1/yangyxt/indexed_genome/ucsc.hg19.2bit
# chmod a+x /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/job.csh
# chmod a+x /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/doAlign.csh
# ssh -x -o 'StrictHostKeyChecking = no' -o 'BatchMode = yes' localhost nice /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/doAlign.csh

cd /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat
rm -rf tParts
/paedwy/disk1/yangyxt/ngs_scripts/kent/src/hg/utils/automation/partitionSequence.pl 10000000 0 /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/final_contigs.2bit /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/final_contigs.chrom.sizes 2000 -lstDir=tParts
rm -rf qParts
/paedwy/disk1/yangyxt/ngs_scripts/kent/src/hg/utils/automation/partitionSequence.pl 10000000 0 /paedwy/disk1/yangyxt/indexed_genome/ucsc.hg19.2bit /paedwy/disk1/yangyxt/indexed_genome/ucsc.hg19.chrom.sizes 1000 -lstDir=qParts
mkdir /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/psl
foreach f ( `cat t.lst` )
cat t.lst
mkdir /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/psl/part000.lst
end
gensub2 t.lst q.lst gsub jobList
para make jobList
**rudpSend timed out**
**pmSendString timed out!**
pmSendString: will sleep 60 seconds and retry
rudpSend timed out
pmSendString timed out!
pmSendString: will sleep 60 seconds and retry
rudpSend timed out
pmSendString timed out!
pmSendString: will sleep 60 seconds and retry
rudpSend timed out
pmSendString timed out!
pmSendString: will sleep 60 seconds and retry
rudpSend timed out
pmSendString timed out!
Command failed:
ssh -x -o 'StrictHostKeyChecking = no' -o 'BatchMode = yes' localhost nice /paedwy/disk1/yangyxt/wesplus/WESplus_test_data_2/aligned_results/spades_test_NCF1_NCF1C_NCF1B/K127/run.blat/doAlign.csh

Blat not finding Sequence

The command line version of blat is not finding a sequence in the human genome. I'm able to locate that same sequence on the UCSC Genome Browser version of blat.

I downloaded the Linux command line version of Blat (blat - Standalone BLAT v. 37x1 fast sequence search command line tool) and hg38.2bit from https://hgdownload.cse.ucsc.edu/goldenpath/hg38/bigZips/hg38.2bit.

The specific sequence I want to query is

>query1
GGTTTCGCAGATTTTTCCCGACTCTGTAATGTTGGCGGTGCAGGAAGGGATTGACTTACTCACTTTTCCGCCGGCGCCCGGTTCTCCGGAGCCGCCTCACCTTTCCCGGCAGCCCGAGCAGCCGGAGCAGAGAGCCTTGGGTCCGGTTTCTATGCCAAACCTTGTACCGGAGGTGATCGATCTTACCTGCCACGAGGCTTCCACCCAGTGACGACGAGGATGAAGAGGGTGAGGAGTTTGTGTTAGATTATGTGGAGCACCCCGGGCACGGTTGCAGGTCTTGTCATTATCACCGGAGGAATACGGGGGACCCAGATATTATGTGTTCGCTTTGCTATATGAGGACCTGTGGCATGTTTGTCTACA

Here's the blat command I'm using:
./blat -noHead -out=psl ~/Downloads/hg38.2bit query.fa output.psl

The command runs but nothing gets output to the output.psl file. The command works with other human genome sequences.

Here are the results from the BLAT UCSC genome browser:
(screenshot of the web BLAT results omitted)

It finds the sequence in the browser tool but not in the command line version. I've tried hg38.2bit, hg19.2bit, and the whole-genome fasta files for hg19 and hg38. Any suggestions on why the command line tool isn't finding this sequence?

Modules in src/hg can't be built with MySQL 8

Running make in the kent/src/hg directory shows these errors:

jksql.c: In function ‘sqlConnRemoteFillIn’:
jksql.c:1121:5: error: unknown type name ‘my_bool’; did you mean ‘_Bool’?
 1121 |     my_bool flag = TRUE;
      |     ^~~~~~~
      |     _Bool
jksql.c:1122:25: error: ‘MYSQL_OPT_SSL_VERIFY_SERVER_CERT’ undeclared (first use in this function); did you mean ‘CLIENT_SSL_VERIFY_SERVER_CERT’?
 1122 |     mysql_options(conn, MYSQL_OPT_SSL_VERIFY_SERVER_CERT, &flag);
      |                         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                         CLIENT_SSL_VERIFY_SERVER_CERT
jksql.c:1122:25: note: each undeclared identifier is reported only once for each function it appears in

It seems like this does not work well with MySQL 8?
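For what it's worth, a hedged compatibility sketch (not necessarily the kent fix): MySQL 8.0 dropped the my_bool typedef and the MYSQL_OPT_SSL_VERIFY_SERVER_CERT option (replaced by MYSQL_OPT_SSL_MODE), while MariaDB keeps both, hence the version test:

#include <mysql.h>

static void requireVerifiedServerCert(MYSQL *conn)
/* Sketch: request server-certificate verification on either client library. */
{
#if MYSQL_VERSION_ID >= 80000 && !defined(MARIADB_BASE_VERSION)
unsigned int sslMode = SSL_MODE_VERIFY_IDENTITY;   /* MySQL 8 replacement */
mysql_options(conn, MYSQL_OPT_SSL_MODE, &sslMode);
#else
my_bool flag = 1;                                  /* pre-8.0 and MariaDB path */
mysql_options(conn, MYSQL_OPT_SSL_VERIFY_SERVER_CERT, &flag);
#endif
}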

Unable to complete installation on fresh Ubuntu 16.04 install

Hello,

I've been trying for a couple of days now to get these applications running on a new machine with a fresh installation of Ubuntu 16.04. I have followed the instructions in the readme, but when I attempt to make any of the individual applications (for instance, fixCr, as suggested in the readme), I get the following error:

~/Installs/kent/src/utils/fixCr$ sudo make
gcc -O -g -o /home/jc/bin/x86_64/fixCr fixCr.o    ../../lib/x86_64/jkweb.a -L/usr/lib/x86_64-linux-gnu -lmysqlclient -lpthread -lz -lm -lrt -ldl -lstdc++ -lrt -lm ../../htslib/libhts.a -pthread -L/lib -lssl -lcrypto -lpng12 -lm
gcc: error: ../../htslib/libhts.a: No such file or directory
../../inc/userApp.mk:31: recipe for target '/home/jc/bin/x86_64/fixCr' failed
make: *** [/home/jc/bin/x86_64/fixCr] Error 1

I was able to resolve this error by going to the htslib directory and running make. The readme did not indicate that I needed to do that, so I'm wondering if that was an oversight, or an indication that the makes I ran in previous parts of the readme somehow failed without alerting me. They did give a couple of warnings, but they didn't seem to outright fail, as far as I could tell.

However, I now receive the following error when trying to proceed:

~/Installs/kent/src/utils/fixCr$ sudo make
gcc -O -g  -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64 -DUSE_SSL   -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../../htslib -I/include -I/usr/include/libpng12  -o fixCr.o -c fixCr.c
gcc -O -g -o /home/jc/bin/x86_64/fixCr fixCr.o    ../../lib/x86_64/jkweb.a -L/usr/lib/x86_64-linux-gnu -lmysqlclient -lpthread -lz -lm -lrt -ldl -lstdc++ -lrt -lm ../../htslib/libhts.a -pthread -L/lib -lssl -lcrypto -lpng12 -lm
/usr/bin/ld: cannot open output file /home/jc/bin/x86_64/fixCr: No such file or directory
collect2: error: ld returned 1 exit status
../../inc/userApp.mk:31: recipe for target '/home/jc/bin/x86_64/fixCr' failed
make: *** [/home/jc/bin/x86_64/fixCr] Error 1

I have verified that /bin/x86_64 is in my path, so I'm a bit lost as to what's going on here. Please let me know if you have any suggestions. I would be happy to attach the output from all of these makes if that would help.

Best,
Jeremy

Floating point error resulting in incorrect liftover results

Hi, I have noticed the following bug when running the liftover executable (and via https://genome.ucsc.edu/cgi-bin/hgLiftOver):

Using the chain file from ftp://hgdownload.cse.ucsc.edu/goldenPath/hg19/liftOver/hg19ToHg38.over.chain.gz, a bed file whose content is a single line reading chr10 18122748 18122818, and -minMatch=0.4, I expect a successful liftover.

Instead, the result is an error with the message "Partially deleted in new".

This should be a success because:

  • The region has a length of 70
  • The region intersects two chains: id 12, with intersectSize = 0, and id 65781 with intersectSize = 28.
  • 28 / 70 = 0.4

The failure occurs because minMatchSize is incorrectly calculated as 28.000000417232513, so intersectSize >= minMatchSize is false.
minMatchSize is incorrect because the input minMatch is actually 0.40000000596046448 instead of 0.4.
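For illustration, a small standalone C check (not kent code) reproducing these numbers; on a typical IEEE-754 platform the float path overshoots 28 while the double product rounds to exactly 28:

#include <stdio.h>

int main(void)
{
float  asFloat  = 0.4f;   /* what optionFloat() effectively hands back */
double asDouble = 0.4;
printf("%.17g\n", (double)asFloat);       /* 0.40000000596046448 */
printf("%.17g\n", 70 * (double)asFloat);  /* 28.000000417232513 -> 28 >= minMatchSize is false */
printf("%.17g\n", 70 * asDouble);         /* 28 -> 28 >= minMatchSize is true */
return 0;
}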

This can be fixed by changing minMatch from a float to a double in src/hg/liftOver/liftOver.c. It also looks like minBlocks should have the same change applied. See below for a diff containing the fix:

diff --git a/src/hg/liftOver/liftOver.c b/src/hg/liftOver/liftOver.c
index 6f2a3a5330..2539fb9992 100644
--- a/src/hg/liftOver/liftOver.c
+++ b/src/hg/liftOver/liftOver.c
@@ -32,10 +32,10 @@ static struct optionSpec optionSpecs[] = {
     {"genePred", OPTION_BOOLEAN},
     {"gff", OPTION_BOOLEAN},
     {"hasBin", OPTION_BOOLEAN},
-    {"minBlocks", OPTION_FLOAT},
+    {"minBlocks", OPTION_DOUBLE},
     {"minChainQ", OPTION_INT},
     {"minChainT", OPTION_INT},
-    {"minMatch", OPTION_FLOAT},
+    {"minMatch", OPTION_DOUBLE},
     {"minSizeQ", OPTION_INT},
     {"minSizeT", OPTION_INT},
     {"multiple", OPTION_BOOLEAN},
@@ -188,8 +188,8 @@ double minMatch = LIFTOVER_MINMATCH;
 double minBlocks = LIFTOVER_MINBLOCKS;

 optionInit(&argc, argv, optionSpecs);
-minMatch = optionFloat("minMatch", minMatch);
-minBlocks = optionFloat("minBlocks", minBlocks);
+minMatch = optionDouble("minMatch", minMatch);
+minBlocks = optionDouble("minBlocks", minBlocks);
 fudgeThick = optionExists("fudgeThick");
 multiple = optionExists("multiple");
 noSerial = optionExists("noSerial");

pngwrite.c:7:87: fatal error: png.h: No such file or directory

When I go to ./src/lib and type make:

genemind1@iZuf636tvvpjh8fy0ety3pZ:~/kent/src/lib$ make
/bin/sh: 1: libpng-config: not found

/bin/sh: 1: libpng-config: not found

/bin/sh: 1: mysql_config: not found

/bin/sh: 1: mysql_config: not found

gcc -O -g -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_ucscKent -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../htslib -I/include -o pngwrite.o -c pngwrite.c
pngwrite.c:7:87: fatal error: png.h: No such file or directory
#include "png.h" // MUST come before common.h, due to setjmp checking in pngconf.h
^
compilation terminated.
make: *** [pngwrite.o] Error 1

Question about twoBit.c / java

Hi the UCSC team,

I'm currently writing a PR for the "Java API for high-throughput sequencing data (HTS) formats", the htsjdk project.

The goal of my PR samtools/htsjdk#1417 is to write Java code handling the '.2bit' format. My Java code is largely inspired by your C code twoBit.c.

  1. Are you OK with including my code in the htsjdk project? Should I add any specific license (currently MIT) or any author in my code?

  2. A technical question: I need to build a SequenceDictionary where the order of the contigs must be the same as in the input fasta.
    When faToTwoBit builds a '.2bit' file, is the order of the sequences in the original fasta file always the same as in the '.2bit' file (at this position, when reading: https://github.com/ucscGenomeBrowser/kent/blob/master/src/lib/twoBit.c#L658 ), or is there any re-ordering by a hash table? (See the sketch after this list.)
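For context, a minimal standalone sketch (not kent code) that lists the sequence names of a version-0 .2bit file in index order; it assumes the file's byte order matches the host's (the swapped-signature case is omitted for brevity):

#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
if (argc != 2) return 1;
FILE *f = fopen(argv[1], "rb");
if (f == NULL) return 1;
uint32_t header[4];   /* signature, version, sequenceCount, reserved */
if (fread(header, sizeof(uint32_t), 4, f) != 4) return 1;
if (header[0] != 0x1A412743 || header[1] != 0) return 1;
for (uint32_t i = 0; i < header[2]; i++)
    {
    uint8_t nameSize;
    char name[256];
    uint32_t offset;
    if (fread(&nameSize, 1, 1, f) != 1) return 1;
    if (fread(name, 1, nameSize, f) != nameSize) return 1;
    name[nameSize] = '\0';
    if (fread(&offset, 4, 1, f) != 1) return 1;
    printf("%u\t%s\n", i, name);   /* the index is a flat list, no hashing */
    }
fclose(f);
return 0;
}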

Thank you,

Pierre

knetUdc.c does not compile

Hi,
I cloned the current git (91029f6) and tried to build the lib via

cd src/lib
make

Unfortunately this does not build successfully:

cc -O -g  -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64   -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../htslib -I/include -I/usr/include/libpng16  -o knetUdc.o -c knetUdc.c
knetUdc.c: In function ‘kuOpen’:
knetUdc.c:25:3: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 kf->udcf = udcf;
   ^~
knetUdc.c:26:57: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 verbose(2, "kuOpen: returning %lu\n", (unsigned long)(kf->udcf));
                                                         ^~
knetUdc.c: In function ‘kuRead’:
knetUdc.c:40:59: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 verbose(2, "udcRead(%lu, buf, %lld)\n", (unsigned long)(fp->udcf), (long long)len);
                                                           ^~
knetUdc.c:41:25: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 return (off_t)udcRead(fp->udcf, buf, (int)len);
                         ^~
knetUdc.c: In function ‘kuSeek’:
knetUdc.c:53:29: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
     offset = off+ udcTell(fp->udcf);
                             ^~
knetUdc.c:56:54: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 verbose(2, "udcSeek(%lu, %lld)\n", (unsigned long)(fp->udcf), offset);
                                                      ^~
knetUdc.c:57:11: error: ‘knetFile {aka struct knetFile_s}’ has no member named ‘udcf’
 udcSeek(fp->udcf, offset);
           ^~

Kind regards

     Andreas.

Question regarding bedToBigBed versioning

The ENCODE project is trying to register the various versions of bedToBigBed used by different pipelines, and I'm having trouble understanding the different versioning conventions used. We currently have 2.6, 2.7, 369, and 377 versions, as you can see here: https://www.encodeproject.org/software/bedToBigBed/

It seems like bedToBigBed was updated to 2.7 some time in 2015: 9febf81#diff-f45d1e877202ab7a14ff9561296e0d87

For those who state they are using bedToBigBed v. 2.7: can we point to http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64.v369/, which was released in 2018? Or to one of the official GitHub releases? I would just like to know the best way to refer/link to bedToBigBed v. 2.7 through a repo or official download.

Thank you for your help!

Account creation error on Ubuntu 16.04.1 with MySQL 5.7.16

Hi
The default settings of MySQL Server version 5.7.16-0ubuntu0.16.04.1 (Ubuntu) prevent account creation; it returns an SQL error.

Warning/Error(s):
Can't start query:
INSERT INTO gbMembers SET userName='testtt',realName='testtt',password=':B:s576OmtG:abbe74926fe1b60e70454d09ee444415',email='[email protected]', lastUse=NOW(),accountActivated='Y'
mySQL error 1364: Field 'newPassword' doesn't have a default value (profile=, host=localhost, db=hgcentral)

The problem comes from the sql_mode default value, which is:

ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

Editing /etc/mysql/my.cnf by adding

[mysqld]
sql_mode =

solved the issue.

undefined reference

According to your README file, GENERAL INSTALL INSTRUCTIONS, step 7: go to the directory src/utils/fixCr, then make.
I get some errors about undefined references (detailed in the attached error.txt). How can I fix it? Or did I do something wrong?
error.txt

Finally, I want to use the twoBitToFa utility to extract fasta from a 2bit file. How does it differ from the Bioconda ucsc-twobittofa package?
Thanks a lot.

profile db not found in sqlProfileToMyCnf()

Thank you for making all these bioinfo scripts, they are very useful!
I'm constructing some karyotype chromosomes using a de novo genome assembly, and I'm trying to adopt doBlastzChainNet.pl into my pipeline, following the guide at http://genomewiki.ucsc.edu/index.php/DoBlastzChainNet.pl#The_new_streamline_pairLastz_script.

I've already started a localhost parasol hub according to the guide at http://genomewiki.ucsc.edu/index.php/Parasol_job_control_system.
Then I started doBlastzChainNet.pl, and it ran for about 30 minutes before crashing. It produced a log file named do.log.

I checked the log file and found two parts that may indicate an error; they are as below:

  1. do.log, lines 8-13:
    profile db not found in sqlProfileToMyCnf() -- failed for file /GPUFS/sysu_mhwang_1/.hgsql.cnf-m2VqGT failed with errno 2
    profile db not found in sqlProfileToMyCnf() -- failed for file /GPUFS/sysu_mhwang_1/.hgsql.cnf-C7Hy3D failed with errno 2
    HgStepManager: executing from step 'partition' through step 'syntenicNet'.
    HgStepManager: executing step 'partition' Wed Dec 29 09:02:38 2021.
    sort: write failed: 'standard output': Broken pipe
    sort: write error

  2. do.log, from line 536:
    gensub2 ptri.lst pcla.lst gsub jobList
    para make jobList
    Checking input files
    14535 jobs written to /GPUFS/sysu_mhwang_1/sysu_mhwang_1/zwei_liuziwei/02_parasol/data/genomes/run.blastz/batch
    14535 jobs in batch
    0 jobs (including everybody's) in Parasol queue or running.
    Checking finished jobs
    updated job database on disk
    Pushed Jobs: 14535
    total sick machines: 1 failures: 5
    ================
    Checking job status 0 minutes after launch
    14535 jobs in batch
    0 jobs (including everybody's) in Parasol queue or running.
    Sick Batch: consecutive crashes (36) >= sick batch threshold (25)
    Checking finished jobs
    updated job database on disk
    total sick machines: 1 failures: 36
    Sick batch! will sleep 10 minutes, clear sick nodes and retry
    Told hub to clear sick nodes
    ...
    ...
    ...
    ================
    Checking job status 33 minutes after launch
    14535 jobs in batch
    0 jobs (including everybody's) in Parasol queue or running.
    Sick Batch: consecutive crashes (36) >= sick batch threshold (25)
    Checking finished jobs
    updated job database on disk
    Batch failed after 4 tries on /GPUFS/sysu_mhwang_1/sysu_mhwang_1/zwei_liuziwei/02_parasol/data/scripts/blastz-run-ucsc -outFormat psl /GPUFS/sysu_mhwang_1/sysu_mhwang_1/zwei_liuziwei/02_parasol/data/genomes/ptri.2bit:NC_059295.1:0-32110000 /GPUFS/sysu_mhwang_1/sysu_mhwang_1/zwei_liuziwei/02_parasol/data/genomes/pcla.2bit:LG03:0-10000000 ../DEF ../psl/ptri.2bit:NC_059295.1:0-32110000/ptri.2bit:NC_059295.1:0-32110000_pcla.2bit:LG03:0-10000000.psl
    Command failed:
    ssh -x -o 'StrictHostKeyChecking = no' -o 'BatchMode = yes' localhost nice /GPUFS/sysu_mhwang_1/sysu_mhwang_1/zwei_liuziwei/02_parasol/data/genomes/run.blastz/doClusterRun.csh

real 33m9.299s
user 0m0.082s
sys 0m0.069s

  • OS: Linux, Ubuntu 18.04.5
  • Terminal: PuTTY
**Additional context**
I also checked my parasol log file, and found this in the file 192.168.206.146.2021-12-29T07\:56.log.node:

2021/12/29 07:56:39: info: starting paraNode on zwei-mysql5-29153733-5c7c7d66c4-gdqnz
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-104-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-105-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-103-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-102-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-101-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-107-1
2021/12/29 08:26:10: warn: duplicate packet filtered out: 192.168.206.146-9694-108-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-136-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-138-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-139-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-137-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-140-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-143-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-142-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-141-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-144-1
2021/12/29 09:03:10: warn: duplicate packet filtered out: 192.168.206.146-9694-145-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-175-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-176-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-178-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-177-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-179-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-180-1
2021/12/29 09:13:40: warn: duplicate packet filtered out: 192.168.206.146-9694-181-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-211-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-205-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-207-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-206-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-212-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-209-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-208-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-210-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-213-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-214-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-215-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-216-1
2021/12/29 09:24:10: warn: duplicate packet filtered out: 192.168.206.146-9694-217-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-246-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-248-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-249-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-250-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-251-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-252-1
2021/12/29 09:34:55: warn: duplicate packet filtered out: 192.168.206.146-9694-253-1

GBIC on-the-fly mode with mRNA tracks fails in pack mode

Hello,
I observe strange behavior in our UCSC genome browser (GBIC).

We have a fully mirrored dm3 genome and run the browser in on-the-fly mode (-f). If you select a larger region (>1 million bp) in the top chromosome navigation ("jump to new region"), the whole browser hangs and sometimes displays an error:

"Encountered a network error. Please try again. If the problem persists, please check your network connection."

and these lines appear in the httpd error log:

[Sun May 05 14:24:34.869274 2019] [cgi:warn] [pid 76103] [client 10.42.211.120:62659] AH01220: Timeout waiting for output from CGI script /usr/local/apache/cgi-bin/hgTracks, referer: https://ucsc.mysite.com/cgi-bin/hgTracks?db=dm3&lastVirtModeType=default&lastVirtModeExtraState=&virtModeType=default&virtMode=0&nonVirtPosition=&position=chr2L%3A15135875%2D20656016&hgsid=753_juILa2OSPbgbUsAhi7fSwb5Mirel
[Sun May 05 14:24:34.869382 2019] [core:error] [pid 76103] (70007)The timeout specified has expired: [client 10.42.211.120:62659] AH00574: ap_content_length_filter: apr_bucket_read() failed, referer: https://ucsc.mysite.com/cgi-bin/hgTracks?db=dm3&lastVirtModeType=default&lastVirtModeExtraState=&virtModeType=default&virtMode=0&nonVirtPosition=&position=chr2L%3A15135875%2D20656016&hgsid=753_juILa2OSPbgbUsAhi7fSwb5Mirel

That only happens if the track "D. melanogaster mRNAs" is set to "pack"; dense mode always works.

If the browser is in offline mode (-o), everything works perfectly fine, for both dense and pack.

Is it intended behavior that the browser always tries to download the mRNA tracks from upstream, even when they are mirrored?

(I already got the newest GBIC script, CGI software, and genome updates; it had no effect on the behavior.)

Best,
Klaus

Please support COVID-19 BioHackathon and choose a free license for parts of your code to build blat

Hi,
we had some past discussion about the licensing of the complete code of this repository. I've understood that you did not consider this. However, parts of the libraries carry an MIT license, and I would like you to consider changing the license of only those parts that might enable us to package blat for Debian. Debian is currently joining the COVID-19 Biohackathon, and in doing so we intend to package everything that could contribute to fighting this disease. We would greatly appreciate your contribution to this fight.
I have identified the following code parts as being necessary to successfully build blat under Debian:

src/blat/*
src/jkOwnLib/*

The other directories that are needed for building are MIT licensed. While you are at it, for other projects we could also use

src/isPcr/*

While this is of lower importance than blat, it would be great to choose MIT or whatever free license you might decide on.
I would really appreciate a decision to change the license of this part of your code, and the Debian community would be thankful for this contribution.
Kind regards
Andreas.

mafToPsl out of range

Hi,

I used the current version (from GitHub) of mafToPsl to convert a MAF file generated by GSAlign to PSL. However, the tool ended with the error:
"Coordinates out of range line 103592 of B73v5.CML322.50.200.maf"

I will post the line that causes the error (103592) plus the line prior in the next message, but I think mafToPsl may be incorrect here, as MAF is 1-based. The line that causes the issue starts with:
s qry.chr1 304926865 658 + 304927522

If you add 304926865 (start) + 658 (alignment length) - 1 (1-based) = 304927522, this is exactly the end of the source length of 304927522. Is it possible that mafToPsl is off by 1 bp (0-based)?

faToVcf reference issue

I'm using faToVcf to extract SNPs from a multiple alignment format (MAF) file, but even if I give the name of the correct reference sequence as input, the reference in the VCF file is completely wrong; it seems the tool uses a consensus sequence as the reference.

Are there any solutions to this problem? I need to extract SNPs using the original reference, otherwise it makes no sense.

can't build Bio-BigFile

Hello,

I am trying to install Bio-BigFile-1.07 and consistently get this error after ./Build:

Building Bio-BigFile
cc -shared -O2 -L/usr/local/lib -fstack-protector -o blib/arch/auto/Bio/DB/BigFile/BigFile.so lib/Bio/DB/BigFile.o ~/kent/src/lib/x86_64/jkweb.a -lz -lssl
/usr/bin/ld: ~/kent/src/lib/x86_64/jkweb.a(bbiRead.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
~/kent/src/lib/x86_64/jkweb.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
error building blib/arch/auto/Bio/DB/BigFile/BigFile.so from lib/Bio/DB/BigFile.o at /software/perl/5.24.1/lib/5.20.2/ExtUtils/CBuilder/Base.pm line 323.

I updated CFLAGS to -fPIC in kent/src/inc/common.mk and ran make in kent/src/lib.
I also followed the instructions below:

export MACHTYPE=x86_64
export PATH="~/bin/$MACHTYPE:$PATH"

But nothing works. It'd be great if you could help me with this.

Missing license for font mgSixhi6.c

Hi,
since I intend to package parts of your code, I was checking the individual licenses of files. For most of the fonts you include I've found

Copyright: 1984-1989, 1994 Adobe Systems Incorporated.,
           1988, 1994 Digital Equipment Corporation.
License: Adobe

and for the file src/lib/font/mgSail8.c I've found

Copyright: 2011 LatinoType Limitada <[email protected]>
License: SILOpenFontLicense

but for the file src/lib/font/mgSixhi6.c I failed to find any license statement. At least for Debian this is a blocker, since no license means non-distributable. Maybe you know where this file is from; it would be great if you could add this to your documentation.
Kind regards, Andreas.

bamToPsl semantic bug

Hi,

I'm reporting a semantic error in bamToPsl: when parsing BAM files where supplementary alignments are included but have been hard clipped (with the BAM CIGAR H operator), the length of the query sequence is incorrectly inferred. When inferring the actual length of the query sequence for reporting Q size in the PSL file, both soft clipping (BAM CIGAR S) and hard clipping operators need to be considered (see the sketch after the example output below). At the very least, this should be added as an option flag for the user.

For example, my bamToPsl PSL output from parsing a test BAM file generated by minimap2 yields:

psLayout version 3

match	mis- 	rep. 	N's	Q gap	Q gap	T gap	T gap	strand	Q        	Q   	Q    	Q  	T        	T   	T    	T  	block	blockSizes 	qStarts	 tStarts
     	match	match	   	count	bases	count	bases	      	name     	size	start	end	name     	size	start	end	count
---------------------------------------------------------------------------------------------------------------------------------------------------------------
1000	0	0	0	0	0	0	0	+	query	1000	0	1000	target	10000	0	1000	1	1000,	0,	0,
8000	0	0	0	0	0	0	0	+	query	10000	2000	10000	target	10000	2000	10000	1	8000,	2000,	2000,
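A hedged illustration of the accounting the report asks for (not bamToPsl's code), using htslib: bam_cigar2qlen() counts only operators that consume the query, so hard-clipped bases must be added back to recover the full read length for the PSL Q size column:

#include <htslib/sam.h>

static long long fullQueryLength(const bam1_t *b)
/* Sketch: full query length including bases removed by hard clipping. */
{
const uint32_t *cigar = bam_get_cigar(b);
long long len = bam_cigar2qlen(b->core.n_cigar, cigar);
uint32_t i;
for (i = 0; i < b->core.n_cigar; i++)
    if (bam_cigar_op(cigar[i]) == BAM_CHARD_CLIP)
        len += bam_cigar_oplen(cigar[i]);
return len;
}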

Best,
Jessen

Certificate validation (https.c) causing issues with build in v424

Describe the bug
After updating to the latest kent_base version (424), our build script failed with an error. On 423 everything works (logs below).

https.c: In function 'verify_callback':
https.c:322:47: error: dereferencing pointer to incomplete type 'X509_STORE_CTX {aka struct x509_store_ctx_st}'
     X509_NAME_oneline(X509_get_issuer_name(ctx->current_cert), buf, 256);
                                               ^~
At top level:
https.c:28:13: warning: 'openssl_locking_callback' defined but not used [-Wunused-function]
 static void openssl_locking_callback(int mode, int n, const char * file, int line)
             ^~~~~~~~~~~~~~~~~~~~~~~~
https.c:23:22: warning: 'openssl_id_callback' defined but not used [-Wunused-function]
 static unsigned long openssl_id_callback(void)
                      ^~~~~~~~~~~~~~~~~~~

To Reproduce
Steps to reproduce the behavior:

docker run --rm -it ubuntu:18.04
apt update -yy && apt install -yy  wget build-essential libssl-dev
wget https://github.com/ucscGenomeBrowser/kent/archive/refs/tags/v424_base.tar.gz && tar -xzf v424_base.tar.gz && cd kent-424_base/src/lib/
cc -O -g "-fPIC" -std=c99 -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64 -DUSE_HIC   -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../htslib  -I/include -I/usr/include/libpng16  -o https.o -c https.c

Expected behavior

wget https://github.com/ucscGenomeBrowser/kent/archive/refs/tags/v423_base.tar.gz && tar -xzf v423_base.tar.gz && cd kent-423_base/src/lib/
cc -O -g "-fPIC" -std=c99 -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64 -DUSE_HIC   -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../htslib  -I/include -I/usr/include/libpng16  -o https.o -c https.c

https.c:27:13: warning: 'openssl_locking_callback' defined but not used [-Wunused-function]
 static void openssl_locking_callback(int mode, int n, const char * file, int line)
             ^~~~~~~~~~~~~~~~~~~~~~~~
https.c:22:22: warning: 'openssl_id_callback' defined but not used [-Wunused-function]
 static unsigned long openssl_id_callback(void)
                      ^~~~~~~~~~~~~~~~~~~

Desktop (please complete the following information):

  • OS: Ubuntu 18.04 docker container
# uname -a
Linux af8f990541cf 5.4.0-80-generic #90~18.04.1-Ubuntu SMP Tue Jul 13 19:40:02 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

My attempts to find a solution to this problem led me to these links:

It looks like ctx->current_cert should be changed.

Using err_cert as in the commit gives me an error:

https2.c: In function 'verify_callback':
https2.c:323:44: error: 'err_cert' undeclared (first use in this function); did you mean 'errAbort'?
     X509_NAME_oneline(X509_get_issuer_name(err_cert), buf, 256);
                                            ^~~~~~~~
                                            errAbort
https2.c:323:44: note: each undeclared identifier is reported only once for each function it appears in

Using errAbort as suggested gives me warnings:

https2.c: In function 'verify_callback':
https2.c:323:44: warning: passing argument 1 of 'X509_get_issuer_name' from incompatible pointer type [-Wincompatible-pointer-types]
     X509_NAME_oneline(X509_get_issuer_name(errAbort), buf, 256);
                                            ^~~~~~~~
In file included from /usr/include/openssl/ssl.h:20:0,
                 from https2.c:6:
/usr/include/openssl/x509.h:640:12: note: expected 'const X509 * {aka const struct x509_st *}' but argument is of type 'void (*)(char *)'
 X509_NAME *X509_get_issuer_name(const X509 *a);
            ^~~~~~~~~~~~~~~~~~~~
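For reference, a hedged sketch of the usual fix (not necessarily what kent adopted): X509_STORE_CTX became opaque in OpenSSL 1.1.0, so the member access ctx->current_cert has to go through an accessor that exists in both old and new OpenSSL:

#include <openssl/ssl.h>
#include <openssl/x509.h>

static int verify_callback(int preverify_ok, X509_STORE_CTX *ctx)
/* Sketch: same shape as the callback in https.c, using the accessor
 * instead of dereferencing the now-opaque struct. */
{
char buf[256];
X509 *cert = X509_STORE_CTX_get_current_cert(ctx);
if (cert != NULL)
    X509_NAME_oneline(X509_get_issuer_name(cert), buf, sizeof(buf));
return preverify_ok;
}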

Consistent documentation for kent tools

Hi, I was wondering if it's possible to adopt a consistent documentation scheme for the kent tools, as found here: http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/FOOTER

The reason I'm making this request is that the Bioconda recipe script parses this document to automatically create packages for distributing the tools. However, the inconsistency requires case-by-case parsing for specific programs, making maintainability non-trivial.

A few examples:

  1. Some tools are missing the following comment string:
    ### kent source version 394 ###
  2. Some tools have no description; an example with no description:
    ================================================================
    ========   bedJoinTabOffset   ====================================
    ================================================================
    usage:
       bedJoinTabOffset inTabFile inBedFile outBedFile
    
  3. Some have unexplained comments
    ================================================================
    ========   bedPartition   ====================================
    ================================================================
    ### kent source version 394 ###
    Error: wrong # args
    

If there's an alternative way to obtain these descriptions, let me know. I'm happy to help as well. Thanks!

bamToPsl v357_base compile error

I have problems compiling bamToPsl; error output is pasted at the end. I saw there were changes to htslib; could this have an impact here?

My target was v357, but I also tried checking out the head of the repository, and the problem persists.

cd bamToPsl && echo bamToPsl && make
bamToPsl
make[1]: Entering directory `/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/utils/bamToPsl'
gcc -O -g -o /home/karln/bin/x86_64/bamToPsl bamToPsl.o    ../../lib/x86_64/jkweb.a -L/usr/lib/x86_64-linux-gnu -lmysqlclient -lpthread -lz -lm -lrt -ldl -lstdc++ -lrt -lpthread -L/lib -lssl -lcrypto ../../htslib/libhts.a -L/usr/lib/x86_64-linux-gnu -lpng12 -lz -lm
../../lib/x86_64/jkweb.a(bamFile.o): In function `bamFetchAlreadyOpen':
/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/lib/bamFile.c:168: undefined reference to `cram_get_Md5'
/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/lib/bamFile.c:175: undefined reference to `cram_get_ref_url'
/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/lib/bamFile.c:176: undefined reference to `cram_get_cache_dir'
../../lib/x86_64/jkweb.a(bamFile.o): In function `bamAndIndexFetchPlus':
/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/lib/bamFile.c:218: undefined reference to `cram_set_cache_url'
collect2: error: ld returned 1 exit status
make[1]: *** [/home/karln/bin/x86_64/bamToPsl] Error 1
make[1]: Leaving directory `/TL/deep-share/archive00/software/packages/kent_source/git/kent/src/utils/bamToPsl'
make: *** [bamToPsl.all] Error 2

Correct Citation to Use

I have used these tools and would like to cite them. Can you provide info on the correct citation to use?

faSplit produces thousands of files

Hi,
I am trying to split a large fasta file containing a genome assembly. The command I am using is:

faSplit sequence myGenome.fa 20 myGenome_

The command produces thousands of files. The file is quite large (32G), so I believe there is something related to that.

Patch to build with current htslib

Hi,
I'd like to continue the discussion from issue #13 in a new issue. I spent some more time on the idea of packaging a part of the code in the kent repository for Debian (for licensing questions I'll open a new thread). Together with John Marshall from htslib upstream, I developed a patch which enables building blat and isPcr successfully with the Debian-packaged htslib (currently version 1.10.2).
Feel free to take over this patch in your code.
Kind regards, Andreas.

Error compiling Source Tree

Dear all,

I want to use the BigWig functionality in Ensembl's VEP software, and therefore I need to install Bio::DB::BigFile, which requires the compiled jkweb.a library from Jim Kent's source tree.

I am following your README to compile and always get the error that openssl/sha.h could not be found.

udc.c:39:25: fatal error: openssl/sha.h: No such file or directory
compilation terminated.
../inc/common.mk:419: recipe for target 'udc.o' failed
make: *** [udc.o] Error 1
administrator@k-hg-srv1:/media/Berechnungen/newVEP/kent-335_base/kent-335_base/src/lib$ make > make.err
udc.c:39:25: fatal error: openssl/sha.h: No such file or directory
compilation terminated.
make: *** [udc.o] Error 1

Can you please tell me what I am doing wrong and what I can do to solve this issue?

Thanks for your help in advance.
Stefan

hgTrackDb bug: segfault when compiled with gcc-7

This bug was likely introduced in commit 0689a6e.
(I recently updated my machines from a version before this commit; before the update, everything seemed to work fine with both gcc-5 and gcc-7.)

When compiled with gcc-5, everything works fine.
When compiled with gcc-7, hgTrackDb segfaults; in my case, the command that fails is hgTrackDb -release=alpha magCap magCapA5 trackDb /kent/kent/src/hg/lib/trackDb.sql .

valgrind trace:
==15332== Invalid read of size 8
==15332== at 0x153F25: trackDbLocalSetting (trackDbCustom.c:477)
==15332== by 0x1541F3: trackDbSetting (trackDbCustom.c:809)
==15332== by 0x154222: trackDbFieldsFromSettings (trackDbCustom.c:161)
==15332== by 0x1143D7: polishSupers (hgTrackDb.c:697)
==15332== by 0x1143D7: buildTrackDb (hgTrackDb.c:744)
==15332== by 0x1143D7: hgTrackDb (hgTrackDb.c:794)
==15332== by 0x11504F: main (hgTrackDb.c:910)
==15332== Address 0x81 is not stack'd, malloc'd or (recently) free'd

(I would say something improper is passed in place of the tdb pointer somewhere.)

Long chromosomes are not supported

Dear authors,

I work with the axolotl genome (32 Gb in 14 chromosomes; the longest is 3.09 Gb). Recently, I wanted to add the GenePred annotation track, but it failed. The generated GenePred file contained wrong coordinates (see the example below).
GTF file

chr7q ambMex60DD transcript 1079057453 1079243825 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; gene_name "LOC102367929 [nr]|ZFP2 [hs]"; homolog "XP_025051492.1";
chr7q ambMex60DD transcript 1079057453 1079243825 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; gene_name "LOC102367929 [nr]|ZFP2 [hs]"; homolog "XP_025051492.1";
chr7q ambMex60DD exon 1079057453 1079060359 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "1";
chr7q ambMex60DD exon 1079092841 1079093048 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "2";
chr7q ambMex60DD exon 1079123257 1079123388 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "3";
chr7q ambMex60DD exon 1079182004 1079182208 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "4";
chr7q ambMex60DD exon 1079215424 1079215541 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "5";
chr7q ambMex60DD exon 1079243317 1079243825 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1"; exon_number "6";
chr7q ambMex60DD CDS 1079059167 1079060359 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1";
chr7q ambMex60DD CDS 1079092841 1079093048 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1";
chr7q ambMex60DD CDS 1079123257 1079123388 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1";
chr7q ambMex60DD CDS 1079182004 1079182208 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1";
chr7q ambMex60DD CDS 1079215424 1079215512 1000 - . gene_id "AMEX60DD050188"; transcript_id "LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1";

GenePred file

LOC102367929 [nr]|ZFP2 [hs]|AMEX60DD201050188.1 chr7q - 1079057452 1079243825 1073741823 1079215512 6 1079057452,1079092840,1079123256,1079182003,1079215423,1079243316, 1079060359,1079093048,1079123388,1079182208,1079215541,1079243825,

The number 1073741823 appears in many entries that fail to be converted and loaded by hgLoadGenePred. However, this number is 0x3fffffff, which is defined as the constant BIGNUM throughout the source code. The coordinates are initialized with this value, but for large enough coordinates the condition if (start > line->start) is never satisfied, which is why the value stays unchanged. Moreover, int is not enough to hold the longest coordinates...

static void getGroupBoundaries(struct gffGroup *group)
/* Fill in start, end, strand of group from lines. */
{
struct gffLine *line;
long start = 0x3fffffff;
long end = -start;
line = group->lineList;
group->strand = line->strand;
for (; line != NULL; line = line->next)
    {
    if (start > line->start)
        start = line->start;
    if (end < line->end)
        end = line->end;
    }
group->start = start;
group->end = end;
}
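For illustration, a hedged sketch of the kind of change described (not the submitted pull request): replace the too-small 0x3fffffff sentinel with limits that exceed any real coordinate, assuming the group's start/end fields are widened to long as well:

#include <limits.h>

static void getGroupBoundaries(struct gffGroup *group)
/* Sketch: sentinels from <limits.h> so coordinates past 0x3fffffff work. */
{
struct gffLine *line = group->lineList;
long start = LONG_MAX;
long end = LONG_MIN;
group->strand = line->strand;
for (; line != NULL; line = line->next)
    {
    if (line->start < start)
        start = line->start;
    if (line->end > end)
        end = line->end;
    }
group->start = start;
group->end = end;
}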

Is this something that can be fixed easily? I saw that this constant appears at multiple positions in the source code, so it may be a bit tricky.
I managed to fix the issue for hgLoadGenePred and will submit a pull request. However, although I tried to change as few things as possible, I don't know if the proposed changes may break the code in another tool.

But it would be cool to have a fixed version of the utils, especially since our installation of the genome browser already supports long chromosomes as such.

thanks a lot!
Sergej

Feature Request: REST API Support for selecting track Data using name identifiers

Hello,
Is it possible to support a custom selection of track data using a name-identifier filter,
somewhat like /getData/track?genome=hg38;track=gold;names=AC008953.7,AL671879.2,KF458873.1 ?

The reason for this request is that rtracklayer provides an interface for querying the UCSC table browser, and it is in the process of migrating to the UCSC REST API.

Currently, to filter by name identifiers, the whole track's data is transferred and searched on the client side, which is very costly on the network. I think it would be far better if the name filter were applied on the server side.
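
To make the cost concrete, here is a sketch of the two approaches; the `names=` parameter is the hypothetical addition requested above, and the JSON field holding the name (`frag` for the gold track) is an assumption that may differ per track:

```
# Today: pull the whole track, then filter on the client.
curl -s 'https://api.genome.ucsc.edu/getData/track?genome=hg38;track=gold;chrom=chr1' \
  | jq '.gold[] | select(.frag == "AC008953.7")'

# Proposed (hypothetical): the server applies the name filter.
curl -s 'https://api.genome.ucsc.edu/getData/track?genome=hg38;track=gold;names=AC008953.7,AL671879.2,KF458873.1'
```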

compilation error with lib/uuid.c

Hi,

I am trying to re-compile the latest stable version, but am encountering the following error when making the libraries:

gcc -O -g  -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_GNU_SOURCE -DMACHTYPE_x86_64   -Wall -Wformat -Wimplicit -Wreturn-type -Wuninitialized -I../inc -I../../inc -I../../../inc -I../../../../inc -I../../../../../inc -I../htslib -I/include -I/n/home01/lassance/include/libpng15  -o uuid.o -c uuid.c
In file included from uuid.c:9:0:
../inc/uuid.h:8:23: fatal error: uuid/uuid.h: No such file or directory
 #include <uuid/uuid.h>
                       ^
compilation terminated.
make: *** [uuid.o] Error 1

I am using gcc version 4.8.2 & a Linux system (x86_64-redhat-linux-gnu).
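
In case it helps others who hit this: `uuid/uuid.h` ships with libuuid, so installing the distribution's development package usually resolves it (the package names below are the usual ones, not verified against this exact system):

```
# RHEL / CentOS (matches the x86_64-redhat-linux-gnu target above)
sudo yum install libuuid-devel

# Debian / Ubuntu
sudo apt-get install uuid-dev
```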

Thanks!

Fasta sequence name validation

Dear UCSC,

Based on my reading (https://genome.ucsc.edu/FAQ/FAQformat.html#format18), it seems that fasta sequence names have no specific requirements. I'm wondering why validateFiles restricts fasta sequence names to the FASTQ naming standard:

https://github.com/ucscGenomeBrowser/kent/blob/master/src/hg/encode3/validateFiles/validateFiles.c#L1136
https://github.com/ucscGenomeBrowser/kent/blob/master/src/hg/encode3/validateFiles/validateFiles.c#L455
https://github.com/ucscGenomeBrowser/kent/blob/master/src/hg/encode3/validateFiles/validateFiles.c#L258

This standard does not allow `|`, which NCBI uses extensively. Can this be updated?
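
For concreteness, a typical NCBI-style defline such as the following (an illustrative example) would be rejected purely because of the pipes:

```
>gi|568336023|gb|CM000663.2| Homo sapiens chromosome 1, GRCh38 reference primary assembly
```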

bigWigSummary produces needLargeMem errors on many bigwigs

On certain files, it happens specifically when querying chr8 with an end coordinate close to the chromosome size and a large number of bins (> 100,000).

$ bigWigSummary ENCFF856LYZ.bigWig chr8 0 145138636 120000 > /dev/null
needLargeMem: trying to allocate 18446744069429595380 bytes (limit: 17179869184)

The attempted allocation is clearly not reasonable: 18446744069429595380 is 2^64 - 4,279,956,236, which is exactly what a modest negative byte count looks like once it is cast to an unsigned 64-bit size.
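
A minimal sketch of the arithmetic behind that number (an illustration of the symptom, not the actual code path inside bigWigSummary):

```
#include <stdio.h>
#include <stddef.h>

int main(void)
{
/* A negative byte count, cast to the unsigned type an allocator takes,
 * wraps modulo 2^64 and becomes astronomically large. */
long long wanted = -4279956236LL;   /* = 18446744069429595380 - 2^64 */
size_t asked = (size_t)wanted;
printf("%zu\n", asked);             /* prints 18446744069429595380 */
return 0;
}
```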

I'm running into this same issue with the following files from encode:

https://www.encodeproject.org/files/ENCFF856LYZ/@@download/ENCFF856LYZ.bigWig
https://www.encodeproject.org/files/ENCFF992JJW/@@download/ENCFF992JJW.bigWig
https://www.encodeproject.org/files/ENCFF928WEU/@@download/ENCFF928WEU.bigWig
https://www.encodeproject.org/files/ENCFF723XZS/@@download/ENCFF723XZS.bigWig
https://www.encodeproject.org/files/ENCFF828IPR/@@download/ENCFF828IPR.bigWig
https://www.encodeproject.org/files/ENCFF613CYH/@@download/ENCFF613CYH.bigWig
https://www.encodeproject.org/files/ENCFF917YSR/@@download/ENCFF917YSR.bigWig
https://www.encodeproject.org/files/ENCFF676GTP/@@download/ENCFF676GTP.bigWig
https://www.encodeproject.org/files/ENCFF353YGE/@@download/ENCFF353YGE.bigWig
https://www.encodeproject.org/files/ENCFF724KWV/@@download/ENCFF724KWV.bigWig
https://www.encodeproject.org/files/ENCFF153FDP/@@download/ENCFF153FDP.bigWig
https://www.encodeproject.org/files/ENCFF359QVU/@@download/ENCFF359QVU.bigWig
https://www.encodeproject.org/files/ENCFF367WTF/@@download/ENCFF367WTF.bigWig
https://www.encodeproject.org/files/ENCFF562RHH/@@download/ENCFF562RHH.bigWig
https://www.encodeproject.org/files/ENCFF569CSW/@@download/ENCFF569CSW.bigWig
https://www.encodeproject.org/files/ENCFF876DXW/@@download/ENCFF876DXW.bigWig
https://www.encodeproject.org/files/ENCFF434YEG/@@download/ENCFF434YEG.bigWig
https://www.encodeproject.org/files/ENCFF232VFZ/@@download/ENCFF232VFZ.bigWig
https://www.encodeproject.org/files/ENCFF676UXN/@@download/ENCFF676UXN.bigWig
https://www.encodeproject.org/files/ENCFF700YOH/@@download/ENCFF700YOH.bigWig
https://www.encodeproject.org/files/ENCFF629RRF/@@download/ENCFF629RRF.bigWig
https://www.encodeproject.org/files/ENCFF791ZIC/@@download/ENCFF791ZIC.bigWig
https://www.encodeproject.org/files/ENCFF038IYA/@@download/ENCFF038IYA.bigWig

Gene tracks (UCSC Genes & NCBI RefSeq) show the amino acid sequence as translated from the genome, not from the actual transcript CDS.

Hi there,

As part of a project between LOVD and VariantValidator, I've been reviewing transcript mappings for transcripts where mismatches occur between the genome and the transcript sequence. These are of course hard to map to the genome, and different tools use different methods for this mapping. I was using the UCSC genome browser's mapping to verify results, but I noticed that the amino acids reported in the gene tracks sometimes do not match the transcript's CDS; instead, they appear to be translations of the genome sequence.

Some examples:
NM_015120.4 (NCBI RefSeq status: Reviewed)
NC_000002.11:g.73677655A>G
Both gene tracks map this variant to NM_015120.4:c.3998A>G / p.(Tyr1333Cys), but position 1333 of the CDS of NM_015120.4 does not contain a Tyr (Y), it contains a Thr (T).
Variant Validator maps this variant to NM_015120.4:c.4004A>G / p.(Tyr1335Cys), which does match the CDS.

NM_001145026 (versions 1 and 2) (NCBI RefSeq status: Reviewed)
NC_000012.11:g.80878310C>T
Both gene tracks map this variant to different versions of NM_001145026:c.829C>T / p.(Gln277Ter), but positions 277 of the CDS of both NM_001145026.1 and NM_001145026.2 do not contain a Gln (Q), they contain a Thr (T).
Variant Validator maps this variant to NM_001145026.1:c.1285C>T / p.(Gln429Ter), which does match the CDS.

NM_017848 (versions 4 and 6) (NCBI RefSeq status: Reviewed)
NC_000023.10:g.54209388T>G
Both gene tracks map this variant to different versions of NM_017848:c.244A>C, both resulting in p.(Ile82Leu), but position 82 of the CDS of NM_017848.6 does not contain an Ile (I); it contains a Thr (T). The CDS of NM_017848.4 does contain an Ile there. This difference is not shown in the genome browser.

I would argue that the amino acids shown in the track should represent the true CDS of the given transcripts.

Thank you,
Ivo Fokkema
LOVD project

CC: @maximilianh, @AngieHinrichs (UCSC), @PeteCausey-Freeman (Variant Validator)

bedGraphToBigWig: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Note that we have our own ticket system and we use Github issues only for source code related issues or pull requests. If you have bug reports, contacting us via the public mailing list [email protected] may be easier. See also https://genome.ucsc.edu/contacts.html for ways to search the mailing lists or contact us directly (not public) via email.

Describe the bug
Hello UCSC,
I'm trying to convert a bedGraph file generated by MACS2 to bigWig. I followed example #3 of this tutorial (https://genome.ucsc.edu/goldenpath/help/bigWig.html#:~:text=When%20converting%20a%20bedGraph%20file,that%20it%20contains%20only%20data).
However, I get the error bedGraphToBigWig: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory.
I tried the commands below, but even after OpenSSL was upgraded to v1.1.1f the error was not solved. I found that Ubuntu 20.04 has dropped OpenSSL v1.0.0.
Appreciate it if you could help me with this issue.
Thanks!
Best,
YJ

sudo apt-get update -y
sudo apt-get install -y libssl-dev
sudo ln -s libssl.so.1.1.1f libssl.so.1.0.0
sudo ln -s libcrypto.so.1.1.1f libcrypto.so.1.0.0

To Reproduce
Steps to reproduce the behavior:

conda install -c bioconda ucsc-bedclip
conda install -c bioconda ucsc-bedgraphtobigwig
bedtools slop -i /home/hyjforesight/MACS3_outputs/Cracd_KO1_treat_pileup.bdg -g /home/hyjforesight/mm10_chromInfo.txt -b 0 | bedClip stdin /home/hyjforesight/mm10_chromInfo.txt /home/hyjforesight/bigWig/Cracd_KO1_treat_pileup_clip.bdg    # clip bedGraph generated by MACS2
LC_ALL=C sort -k1,1 -k2,2n /home/hyjforesight/bigWig/Cracd_KO1_treat_pileup_clip.bdg > /home/hyjforesight/bigWig/Cracd_KO1_treat_pileup_clip_sorted.bdg    # sort
bedGraphToBigWig /home/hyjforesight/bigWig/Cracd_KO1_treat_pileup_clip_sorted.bdg /home/hyjforesight/mm10_chromInfo.txt /home/hyjforesight/bigWig/Cracd_KO1_treat_pileup_clip_sorted.bw
bedGraphToBigWig: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Expected behavior
should generate bigWig smoothly

Desktop:

  • OS: Ubuntu 20.04
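
Two workarounds that typically avoid the libssl.so.1.0.0 dependency (both are suggestions, assuming your conda channels and network access allow them):

```
# Option 1: pin an OpenSSL 1.0 build inside the conda environment, if one
# is still available on your channels.
conda install "openssl=1.0"

# Option 2: bypass conda and use the binary UCSC publishes, which is built
# against different libraries (it still needs a reasonably recent glibc);
# put it ahead of the conda one in PATH.
wget http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64/bedGraphToBigWig
chmod +x bedGraphToBigWig
```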
