dpwe / audfprint
Landmark-based audio fingerprinting
License: MIT License
Hi @dpwe,
I'm investigating how the self.table in hash_table.py works when inserting and querying data, and I ran into some confusion. Please see my notes below:
# In hash_table.py, function get_hits():
# hashes = parameter query hash returned from wavfile2hashes (list of (time, hash) tuples)
# hash_ is taken straight from hashes, meaning the hash value is NOT masked
hash_ = hashes[ix][1]
# Check how many entries self.counts holds for the given hash_
# hash_ is still NOT masked, which is correct because self.counts stores unmasked hashes
nids = min(self.depth, self.counts[hash_])
# This is where it gets confusing: we retrieve the table values using the unmasked hash
tabvals = self.table[hash_, :nids]
# But the implementation of storing data into self.table in hash_table.py function store() is like this:
# sortedpairs = time-hash pairs ingested from sound file, returned from wavfile2hashes (list of (time, hash) tuples)
for time_, hash_ in sortedpairs:
# This is still OK because we look up the count using the original (unmasked) hash
count = self.counts[hash_]
time_ &= timemask
# This is the point where the hash_ variable is replaced with the masked version of hash_
hash_ &= hashmask
val = (idval + time_) #.astype(np.uint32)
if count < self.depth:
# This is where we store into self.table USING the MASKED version of hash_, which contradicts the retrieval path above
self.table[hash_, count] = val
The TL;DR is: we store values into the hash table keyed by the masked hash, but then retrieve values from the table keyed by the original, unmasked hash. Can you kindly explain the situation? Thanks!
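One way the two paths could still agree (a sketch under an assumption, not a confirmed reading of the code): if wavfile2hashes already emits hashes that fit within hashbits bits, then the masking in store() is a no-op and the unmasked lookup in get_hits() indexes the very same bucket.

```python
# Minimal sketch; hashbits=20 is an assumption matching the
# 2**20-bucket table discussed elsewhere in this thread.
hashbits = 20
hashmask = (1 << hashbits) - 1

# If analyzer hashes already fit in hashbits bits, masking changes
# nothing, so store() and get_hits() index the same bucket:
h = 0x3F2A1                    # already < 2**20
assert (h & hashmask) == h     # masking is a no-op here

# Only a hash wider than hashbits would make the two paths diverge:
wide = (1 << 22) | h
assert (wide & hashmask) == h  # store() folds it into bucket h
```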
I've been using audfprint to do spectral peak analysis in order to understand what Shazam et al. see as "peaks" within a given track. So I've mostly been using the precompute and -K options to produce peak files, then extracting the location/frequency pairs from the afpk file to use elsewhere.
I'm noticing that, especially at relatively low densities (say, 10 hashes/sec), I can end up with no peaks detected for long stretches. For example, on a test track I'll end up without any peaks over a stretch of 4 seconds or more, even for sonic material with perceptible activity.
So I have two questions about this.
First, is there any way to force higher sensitivity without requiring higher overall density? In other words, at a density of 10 hashes/sec I don't need any more peaks in the dense areas, but I would appreciate more peak detection in the lulls. Can I ask for a minimum hashes/sec?
Second, do Shazam et al. ever let that long a stretch go without recorded peaks? My intuition from playing with it is no, but I was surprised to encounter such low peak detection.
1. In audfprint.py line 358, it should be:
-v, --verbose Verbosity level
2. In audfprint_match.py line 183, it should be:
results.resize((maxnresults, 7))
Could you please describe how the landmark-based fingerprint search works in your implementation? Also, assuming I wanted to embed the audio representation in a different way (say, a spectrum or correlogram instead of a fingerprint), would the landmark-based representation still work?
Thank you.
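For what it's worth, the landmark scheme is usually sketched like this (the bit widths below are illustrative assumptions, not the actual packing in audfprint_analyze.py): each hash encodes a pair of spectrogram peaks, and matching counts hashes that agree on a consistent time offset. A dense representation like a spectrum or correlogram would first need to be quantized into discrete, reproducible tokens before this kind of inverted-index voting could work.

```python
# Hypothetical bit allocation, for illustration only.
F1_BITS, DF_BITS, DT_BITS = 8, 6, 6

def landmark_to_hash(t1, f1, t2, f2):
    """Pack a pair of spectrogram peaks into a single integer hash."""
    df = (f2 - f1) & ((1 << DF_BITS) - 1)   # frequency difference
    dt = (t2 - t1) & ((1 << DT_BITS) - 1)   # time difference
    f1 &= (1 << F1_BITS) - 1
    return (f1 << (DF_BITS + DT_BITS)) | (df << DT_BITS) | dt

# The query side recomputes the same hashes and votes for
# (track_id, t_ref - t_query); a true match piles votes onto one offset.
h = landmark_to_hash(t1=10, f1=100, t2=14, f2=130)
```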
Hi,
I have a few questions regarding scaling:
Considering the current implementation of HashTable, which holds 1,048,576 unique hashes with 100 entries per hash, you've explained that this configuration can store more than 100k song tracks. If I have millions of tracks, would that reduce detection accuracy across all of them? My understanding is that once the bucket for a specific hash is full (over 100 entries), a new entry replaces an existing one at a random position among the 100 slots (replaced by the new song/track id).
If number 1 is correct: if I increase the number of unique hashes and also the bucket depth per hash, will that allow better detection across millions of tracks? I assume there will be a tradeoff with performance/speed.
Thanks.
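As I understand the described behavior, the bucket update can be sketched like this (a sketch of the idea, not the actual hash_table.py code):

```python
import random

DEPTH = 100  # entries per bucket, as described in the issue

def store(bucket, val, rng=random):
    """Fixed-depth bucket insert with random replacement when full."""
    if len(bucket) < DEPTH:
        bucket.append(val)
    else:
        # Full bucket: overwrite a uniformly random slot, so every old
        # entry is eventually at risk of eviction under write pressure.
        bucket[rng.randrange(DEPTH)] = val

bucket = []
for v in range(250):
    store(bucket, v)
print(len(bucket))   # stays capped at DEPTH
```

So with millions of tracks, popular hashes saturate their buckets and matching degrades gracefully rather than failing outright; raising hashbits spreads load across more buckets, while raising depth keeps more entries per bucket at the cost of memory and per-query scan time.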
Add an option to calculate and report the time support within the query of the matching hashes, i.e. the earliest and latest times of found matching hashes. (Adding the skew time also gives the time support within the reference.) This is useful for organizing multiple matching excerpts in a single query, even if they come from arbitrary places within reference items.
Any chance we could turn this into a package and publish it to PyPI?
when matching time ranges in https://github.com/dpwe/audfprint/blob/master/audfprint_match.py#L178 wouldn't it make sense to filter on id as well? Something like this:
match_times = hits[np.logical_and.reduce([hits[:, 1] >= minoffset,
                                          hits[:, 1] <= maxoffset,
                                          hits[:, 0] == id]), 3]
I was doing some tests with the code.
The only difference is that now the query is 1 s long. To make it work, I increased the landmark density and changed the target zone to a smaller one (63 bins and 15 symbols). Even without noise or degradation, the recognition rate is only 92%. Is this expected, since the code is designed mainly for queries of at least 5 s?
audfprint appears to be Linux focused.
import os
import platform
import psutil

def process_info():
    rss = usrtime = 0
    p = psutil.Process(os.getpid())
    if platform.system().lower() == 'windows':
        rss = p.memory_info()[0]
        usrtime = p.cpu_times()[0]
    elif platform.system().lower() == 'linux':
        rss = p.get_memory_info()[0]      # older psutil API on Linux
        usrtime = p.get_cpu_times()[0]
    return rss, usrtime

if __name__ == "__main__":
    print(process_info())
Hello,
How can I get/calculate a file's duration from the database? Can I store the file duration while adding it to the database?
I want to query a long mixed/combined audio file whose original files are in the database. Is that possible already?
PS: I noticed that when I re-add files to the database, the hash count doubles but the number of files stays the same.
Hello,
Is it possible to search only the hashes of a known track_id or filename, rather than the whole database? If so, where should I begin? I am new to Python and don't understand the code very well, so I need some help, please.
I believe that if the database is huge (thousands of tracks), running many queries takes a lot of resources and time, right? Would this kind of filtering help much?
Thanks for your code, but I couldn't match the query in tests/data/query.mp3.
I test as follows:
python3 audfprint.py new --dbase fpdbase.pklz tests/data/Nine_Lives/0*.mp3
python3 audfprint.py add --dbase fpdbase.pklz tests/data/Nine_Lives/1*.mp3
python3 audfprint.py match --dbase fpdbase.pklz tests/data/query.mp3
Then get the result:
NOMATCH tests/data/query.mp3 5.6 sec 267 raw hashes
So I don't know why the result is not the same as in the README. Thank you very much; looking forward to your reply.
I may be using the parameters incorrectly, but the output isn't behaving the way I would expect
I have a folder for my media /var/audfprint/media_cache
, and folder for fingerprints /var/audfprint/afpt_cache
I'm passing them in using --precompdir and --wavdir as appropriate. I would expect the fingerprint for a given media file in the media cache to appear directly in the afpt cache. However, the resulting file is nested inside a directory path within the afpt cache.
It would be very helpful to be able to specify that I don't want audfprint to recreate my directory structure in the fingerprint directory.
Example Input:
$> python audfprint.py precompute --precompdir /var/audfprint/afpt_cache --wavdir /var/audfprint/media_cache media-1.mp3
Output:
> Thu Dec 31 18:29:05 2015 precomputing hashes for /var/audfprint/media_cache/media-1.mp3 ...
> wrote /var/audfprint/afpt_cache/var/audfprint/media_cache/media-1.afpt ( 49183 hashes, 1859.936 sec)
Desired Output:
> Thu Dec 31 18:29:05 2015 precomputing hashes for /var/audfprint/media_cache/media-1.mp3 ...
> wrote /var/audfprint/afpt_cache/media-1.afpt ( 49183 hashes, 1859.936 sec)
Apologies for the sparsely written issue, I can add more details after an impending deadline!
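For the record, the nesting is what you'd get from naively joining the input path under the output dir. Here's a hedged sketch of the flattening being requested (precomp_path and its strip_prefix argument are hypothetical illustration names, not audfprint's actual API):

```python
import os

def precomp_path(precompdir, filename, strip_prefix=None):
    """Hypothetical sketch of output-path construction.

    Joining an absolute input path under precompdir recreates the whole
    source directory tree; stripping a known prefix first flattens it.
    """
    rel = filename
    if strip_prefix and rel.startswith(strip_prefix):
        rel = rel[len(strip_prefix):]
    root, _ = os.path.splitext(rel)
    return os.path.join(precompdir, root.lstrip(os.sep) + ".afpt")

# Without stripping, the full media path reappears under the cache dir;
# with the media dir as strip prefix, the output lands at the top level.
nested = precomp_path("/var/audfprint/afpt_cache",
                      "/var/audfprint/media_cache/media-1.mp3")
flat = precomp_path("/var/audfprint/afpt_cache",
                    "/var/audfprint/media_cache/media-1.mp3",
                    strip_prefix="/var/audfprint/media_cache")
print(nested)
print(flat)
```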
I noticed that when I removed the only hash from a database, I get a division-by-zero error at the line where the percentage of dropped hashes is computed.
Steps to reproduce:
Hi!
I'm creating the database out of 10k files. My question is: is this possible? While processing the files, this message appears:
Read fprints for 2023 files ( 8013111 hashes) from fpdbase.pklz (1.09% dropped)
The dropped percentage keeps climbing, and the file count ("2023") doesn't increase.
Should I create smaller databases?
Thanks!
Did someone here test the accuracy of this library compared to the dejavu audio fingerprinting library? Did the author introduce any new concepts to improve accuracy?
Hi,
Is there any way to list the items currently in a database?
thanks
Hi,
I'm looking into delegating some of the scalability issues to a known database, for now MySQL.
I can read fingerprints (with your code) and store them in MySQL (using some dejavu db code); and I can read hash matches back:
https://gist.github.com/Laurian/7869355a000c803f26bb434935a367cb#file-test-py
I'm struggling with how to feed those hash matches back into your further processing, as I don't quite follow the magic around hashmask, timemask, and some of the numpy operations you do.
How would you recommend approaching alternate hashtable implementations?
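In case it helps anyone porting the storage layer: the masks make more sense once you see each table entry as a single integer folding together a track id and a quantized time. A minimal sketch (timebits=14 is an assumption; check hash_table.py for the real constant):

```python
# Sketch of the value packing an audfprint-style table entry implies.
# timebits=14 is assumed, not taken from the code.
timebits = 14
timemask = (1 << timebits) - 1

def pack(track_id, time_):
    """Fold (track_id, quantized time) into one stored integer."""
    return (track_id << timebits) + (time_ & timemask)

def unpack(val):
    """Recover (track_id, time) from a stored value."""
    return val >> timebits, val & timemask

val = pack(track_id=7, time_=1234)
print(unpack(val))
```

An alternate backend such as MySQL would mainly need to reproduce this fold/unfold plus the hash bucketing; the rest of the matching pipeline consumes plain (id, time) pairs.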
I am precomputing the pklz files for a mixture of wav and mp3 files that make up a collection of 68 large files. Then I am seeing whether 1249 snippets appear in one of these 68 larger files.
Interestingly, I always find that the last value is wrong: it is the same as the penultimate value, as if there is an algorithmic error.
Example:
And the penultimate and last values show the same start times too:
Matched precomp/source/aud/20100216_LeadersQuestions.afpt 409.9 sec 14841 raw hashes as segmented/aud/Po01PtFG_20100216_st_0019.wav at -275.7 s with 590 of 662 common hashes at rank 8
Matched precomp/source/aud/20100216_LeadersQuestions.afpt 409.9 sec 14841 raw hashes as segmented/aud/Po01PtFG_20100216_st_0010.wav at -275.7 s with 585 of 662 common hashes at rank 9
This problem does not arise if I create a much smaller number of .afpt files and send all segmented/aud/Po01PtFG_20100216_st_*.wav into it.
I'm wondering: am I running the code with slightly wrong options, and is that the cause of the problem? With density = 100 I get a few snippets of duration > 10 s that are not recognised. With density = 50, only snippets < 2.5 s go unrecognised, which I would be throwing away anyway. With much smaller densities I get more and more NOMATCH results.
My script for running the software looks something like this:
ls source/aud/*.wav source/aud/*.mp3 > largeFiles.list
ls segmented/aud/$2 > allSnippets.list
rm -rf precomp
echo "Precompute peaks for all large files..."
./audfprint.py precompute --samplerate 11024 --density 50 --shifts 1 --precompdir precomp --ncores 4 --list largeFiles.list
find precomp/ -name "*.afpt" > precomp.list
echo "Take the snippets and build a database of their peak profiles..."
./audfprint.py new --dbase snippets.db --density 50 --samplerate 11025 --shifts 4 --list allSnippets.list
echo "Lastly, find matches and arrange on screen to make the information easier to read..."
./audfprint.py match --dbase snippets.db --match-win 2 --min-count 20 --max-matches 100 --sortbytime --opfile matches.txt --ncores 4 --list precomp.list
echo "Check against known results..."
grep Matched matches.txt | sed -e "s@precomp@@" -e "s@/2/data/@@g" -e "s/.mp3//" -e "s/.wav//" -e "s/.afpt//" | awk '{printf "%s\t%2d:%2.1f\t%s\n",$15,(-$11%3600/60),(-$11%60),$9}' > recognised.txt
Is there a way to concatenate precomputed hashes so that the times align properly?
I have recorded consecutive mp3 files and I precompute them. I would like to concatenate the precomputed files without needing to concatenate the original mp3 files and precompute the result.
I'm using your code as a library, so if you can share a code sample that would be great.
I've tried offsetting the hash times by 430 for each file I add, but it's not exact; even when I get the length of the original file using audio_read, it's not exact.
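Not the author, but since hash times are stored in analysis frames, a concatenation helper would need the exact frame count of each preceding file rather than a seconds-based duration. A sketch under assumed frame parameters (SR and N_HOP here may not match audfprint_analyze.py):

```python
SR = 11025      # assumed analysis sample rate
N_HOP = 256     # assumed samples per analysis frame hop

def frames_for(n_samples):
    """Exact whole analysis frames contained in a decoded file."""
    return n_samples // N_HOP

def concat_hashes(per_file_hashes, per_file_samples):
    """per_file_hashes: one [(time_frame, hash), ...] list per file.
    per_file_samples: exact decoded sample count of each file."""
    out, offset = [], 0
    for hashes, n_samples in zip(per_file_hashes, per_file_samples):
        out.extend((t + offset, h) for t, h in hashes)
        offset += frames_for(n_samples)
    return out

# Two 2560-sample files: the second file's hashes shift by 10 frames.
combined = concat_hashes([[(0, 0x123)], [(0, 0x456)]], [2560, 2560])
print(combined)
```

A fixed per-file offset (like 430) drifts because mp3 decoders add encoder padding, so counting the actually decoded samples per file tends to be more exact than a duration estimate.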
Hi,
I ingested 5 audio files to build a pklz file; strangely, only 4 of them are matched (with good results) while one cannot be recognized. It seems the reason is that the mode for this audio file cannot be found. How should I change the parameters to recognize this file, and which parameters most influence the results?
Thanks.
So I was dabbling with the implementation, searching for a small audio clip (abc) within a larger recording (xyz).
When I pickle abc and then run the match command with xyz as the query, I receive perfect results, but the detected time is total chaos when I do it vice versa: pickle xyz and match abc.
Any idea why?
Thanks a ton!
Does the --maxtime parameter represent a number of samples or seconds?
Hi,
Can this Python version be used for detecting/correcting audio alignment like the Matlab version (e.g., from https://labrosa.ee.columbia.edu/~dpwe/resources/matlab/audfprint/#3)? If so, any pointers on how to proceed? Thanks!
Hey, can we query (match) one .pklz file against another .pklz?
For example: python3 audfprint.py match --dbase ads.pklz recs.pklz --find-time-range --exact-count --max-matches 200 --min-count 50 --opfile results.out
On a database of 50k tracks, many of which are near-duplicates (the works of Elvis on many albums), I only got one response back for "Hound Dog", and it was not even the one it was originally sampled from.
Is there a parameter to retrieve more hits?
-brewster
./audfprint.py
Traceback (most recent call last):
File "./audfprint.py", line 24, in <module>
import audfprint_analyze
File "/audfprint/audfprint_analyze.py", line 26, in <module>
import librosa
File "/usr/local/lib/python2.7/dist-packages/librosa/__init__.py", line 12, in <module>
from . import core
File "/usr/local/lib/python2.7/dist-packages/librosa/core/__init__.py", line 108, in <module>
from .time_frequency import * # pylint: disable=wildcard-import
File "/usr/local/lib/python2.7/dist-packages/librosa/core/time_frequency.py", line 10, in <module>
from ..util.exceptions import ParameterError
File "/usr/local/lib/python2.7/dist-packages/librosa/util/__init__.py", line 70, in <module>
from . import decorators
File "/usr/local/lib/python2.7/dist-packages/librosa/util/decorators.py", line 67, in <module>
from numba.decorators import jit as optional_jit
File "/usr/local/lib/python2.7/dist-packages/numba/__init__.py", line 9, in <module>
from . import config, errors, runtests, types
File "/usr/local/lib/python2.7/dist-packages/numba/config.py", line 11, in <module>
import llvmlite.binding as ll
File "/usr/local/lib/python2.7/dist-packages/llvmlite/binding/__init__.py", line 10, in <module>
from .module import *
File "/usr/local/lib/python2.7/dist-packages/llvmlite/binding/module.py", line 8, in <module>
from .value import ValueRef
File "/usr/local/lib/python2.7/dist-packages/llvmlite/binding/value.py", line 9, in <module>
class Linkage(enum.IntEnum):
AttributeError: 'module' object has no attribute 'IntEnum'
Has anyone had the same issue?
Hello,
I am looking at converting this to Python 3. I am looking over the tests and profiling, but I can't seem to find any of the associated audio files. Are they available?
Hello dpwe,
An error occurred when I set --bucketsize to 512 or more.
Why is the maximum bucket size limited to 511?
Sorry for my poor English.
Thanks.
Hey Dan, hope you are doing great.
So I pre-computed some 'recordings' on which I wanted to query a subclip which is contained in all of the said 'recordings'.
In parallel, I also saved the 'recordings' as a single .pkl database and queried the same subclip on it.
Turns out, the first method fails to recognize the subclip in many of the recordings whereas the second method works flawlessly.
Attached below is just one such instance:
Results by 1st method: NOMATCH precomp/home/ubuntu/mm/audfprint-master/tests/data/ABC001 2018-09-08 16-00-00.afpt 3655.4 sec 377315 raw hashes
Results by 2nd method: Matched 46.6 s starting at 935.6 s in ./tests/data/ABC001 2018-09-08 16-00-00.mp3 to time 2.0 s in ./tests/data/adi/clip.mp3 with 1132 of 35696 common hashes at rank 0 count 8
I hope that makes the issue clear enough.
Thanks!
Rather than hard-coding the storage of peak pairs, allow for more informative (but more miss-prone) hashes by combining three or more peaks.
Hi Dan,
Currently I'm testing audfprint's matching accuracy on 60-second mp3 song samples, querying with parts extracted from the same audio at randomized start positions and lengths between 5 and 15 seconds, so the query is a portion of the ingested audio and identical in waveform. These tests went perfectly, with 100% matches. Then I added noise to the extracted parts by mixing in Brownian noise at three levels: 0.2 amplitude (low), 0.5 (medium), and 0.8 (high), on an amplitude scale from 0 to 1. The results varied, but the bottom line is: the higher the noise, the higher the probability of NOMATCH.
To increase accuracy on such tests (noisy but otherwise identical), which parameters would you suggest tuning? I'm afraid that if I change each of them blindly, I won't know whether I've hit the right spot. I know there are a lot of them (--density, --pks-per-frame, etc.); I'm trying to understand what each of them actually does, and I'm getting there, but I still have a lot to learn :D. Thanks.
Hi,
I'm trying to merge tables with different numbers of hashbits. Matching against the original table detects songs correctly, but matching against the table that was merged with the original table does not.
I don't really understand your implementation of the hash table, so if this operation doesn't make sense, please let me know.
I'm doing this so that I can keep the size of the original hash table small. I'm storing many small pickled files in a database and merging those files into a hash table in memory when needed.
Hey, you have made a great algorithm. I have one need: to eliminate the ffmpeg dependency. Is there a way I can avoid that package, or an alternate library that does not use ffmpeg in the background? Thanks!
At the moment audfprint uses the filename/path as identifier for the songs.
It would be great to have an option to pass other values to audfprint. Maybe by an additional parameter.
I plan to use audfprint together with data from musicbrainz.org and would like to store those IDs for the ingested songs.
I tested the sample rate for ingesting content and querying, with the settings below. Note that Case 1 is correct. All source data is originally 8 kHz.
--exact-count
--min-count 1
--density 100
--max-matches 10
--match-win 5
--pks-per-frame 2
Case 1: --samplerate 11000. The first hit is the query matching itself within the content, the second is a true match, and 62.1 s is another instance. All detections in Case 1 are correct.
at 46.476 s with 2 of 9 hashes at rank 1
at 47.686 s with 1 of 9 hashes at rank 1
at 62.183 s with 1 of 9 hashes at rank 1
Case 2: with 8 kHz data and --samplerate 8000, we get a match with itself, but it misses the matches at 47.6 s and 62 s in the top 10 results.
at 46.048 s with 3 of 13 hashes at rank 1
at 332.416 s with 2 of 13 hashes at rank 1
at 12.672 s with 1 of 13 hashes at rank 1
at 13.472 s with 1 of 13 hashes at rank 1
at 97.408 s with 1 of 13 hashes at rank 1
at 121.056 s with 1 of 13 hashes at rank 1
at 146.304 s with 1 of 13 hashes at rank 1
at 257.216 s with 1 of 13 hashes at rank 1
at 323.008 s with 1 of 13 hashes at rank 1
machine info : i3 2nd series 4 cores, 8gb ram, 256 gb ssd, xubuntu 16.04
The query is rec_170126_080002.afpt; the original file is an FM radio recording.
fps : 23348 files ( 2022959318 hashes) from /afp/songs.db
database created using : adsa = ['audfprint.py', 'new', '--dbase', songs_db_file, '--density', '100', '--skip-existing', '--shifts', '6', '--maxtime', '32768', '--ncores', '4', '--list', tmpFile]
-- args.
With --ncores 4 there is no error like the one below, but it freezes; without --ncores 4 (single core), the following error occurs.
What configuration do you suggest? Or what can we do?
(Line numbers may not match the original file; I added custom lines such as a song-duration table, etc.)
./bin/audfprint.py match --dbase /afp/songs.db --match-win 2 --min-count 200 --max-matches 100 --sortbytime --opfile /afp/runs/tmp/songs_0126_141018.rec --find-time-range --list /afp/runs/tmp/run_0126_141018.rec
Read fprints for 23348 files ( 2022959318 hashes) from /afp/songs.db
Thu Jan 26 14:55:58 2017 Analyzed #0 /afp/precomp/1701/26/rec/rec_170126_080002.afpt of 3401.259 s to 298482 hashes
Traceback (most recent call last):
File "./bin/audfprint.py", line 490, in <module>
main(sys.argv)
File "./bin/audfprint.py", line 473, in main
strip_prefix=args['--wavdir'])
File "./bin/audfprint.py", line 161, in do_cmd
msgs = matcher.file_match_to_msgs(analyzer, hash_tab, filename, num)
File "/afp/bin/audfprint_match.py", line 335, in file_match_to_msgs
rslts, dur, nhash = self.match_file(analyzer, ht, qry, number)
File "/afp/bin/audfprint_match.py", line 326, in match_file
rslts = self.match_hashes(ht, q_hashes)
File "/afp/bin/audfprint_match.py", line 281, in match_hashes
results = self._approx_match_counts(hits, bestids, rawcounts)
File "/afp/bin/audfprint_match.py", line 234, in _approx_match_counts
allbincounts = np.bincount((allids << timebits) + alltimes)
MemoryError
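For context on why this particular line runs out of memory: np.bincount allocates a counts array whose length is governed by the largest binned value, not by the number of hits. A small sketch (timebits=14 is an assumption; see audfprint_match.py for the real shift):

```python
import numpy as np

# np.bincount allocates a counts array of length max(input) + 1, so
# with (id << timebits) + time as the binned value and ~23k files, the
# counts array alone is huge even for a handful of hits.
timebits = 14                       # assumed shift width
max_id = 23348                      # files in this database
peak = max_id << timebits           # largest possible binned value
print(peak * 8 / 2**30, "GiB")      # int64 counts array, roughly 2.85 GiB

# One mitigation sketch: count only the unique values actually present,
# which is bounded by the number of hits instead of the value range.
vals = np.array([5, 5, 9, 5])
uniq, counts = np.unique(vals, return_counts=True)
print(uniq, counts)
```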
Currently, Matcher.match_hashes only examines the top 100 most promising ref items to calculate the modal time skews and time-filtered hash counts. For larger databases, this may not be deep enough to find true matches. In any case, hard-coding a parameter like this is very poor style.
Hi, having issues with recognising a clip using this script.
This is the command I used:
python audfprint.py match --dbase db1.pklz SS9-18.wav
And this is the traceback I get:
Read fprints for 4864 files ( 25037164 hashes) from db1.pklz (2.75% dropped)
Traceback (most recent call last):
File "audfprint.py", line 490, in
main(sys.argv)
File "audfprint.py", line 473, in main
strip_prefix=args['--wavdir'])
File "audfprint.py", line 156, in do_cmd
msgs = matcher.file_match_to_msgs(analyzer, hash_tab, filename, num)
File "/home/ben/audio_recognition-master/bin/audfprint/audfprint_match.py", line 379, in file_match_to_msgs
rslts, dur, nhash = self.match_file(analyzer, ht, qry, number)
File "/home/ben/audio_recognition-master/bin/audfprint/audfprint_match.py", line 355, in match_file
q_hashes = analyzer.wavfile2hashes(filename)
File "/home/ben/audio_recognition-master/bin/audfprint/audfprint_analyze.py", line 407, in wavfile2hashes
self.peaks2landmarks(peaklist)))
File "/home/ben/audio_recognition-master/bin/audfprint/audfprint_analyze.py", line 89, in landmarks2hashes
hashes[:, 0] = landmarks[:, 0]
IndexError: too many indices for array
Time is of the essence here unfortunately, and any help would be appreciated.
This may be one of the inherent disadvantages of the algorithm used in audfprint, but is there a way (via the options) to improve the match rate on songs whose pitch/tempo was changed?
In my tests (using Ableton), changing pitch by more than ±1% makes songs unmatchable; the same goes for changing tempo only (preserving the key) by more than ±5%.
-Lior
Hi, Professor Dan,
Shouldn't the _calculate_time_ranges() function be modified as follows:
def _calculate_time_ranges(self, hits, id, mode, mintime):
"""Given the id and mode, return the actual time support."""
match_times = sorted(hits[row, 3]
for row in np.nonzero(hits[:, 0]==id)[0]
if mode + mintime - self.window <= hits[row, 1]
and hits[row, 1] <= mode + mintime + self.window)
and _approx_match_counts() should call: min_time, max_time = self._calculate_time_ranges(hits, id, mode, mintime),
since (mode + mintime) physically indicates the real time skew.
When I upped the machine to 3 GB of virtual memory, it worked fine. I tried using --addcheckpoint to see if I could write the table out before it got too large, but it did not seem to be a recognized command-line option in the newest release.
Searching 50k tracks with a 10-second clip took 11 seconds of real time inside a docker container (rajbot/audfprint).
-brewster
Hi, if I were to import this library into a larger Python script, how would I go about running the equivalent of the following commands:
python audfprint.py match -x 10 --dbase test.pklz path/to/possible/match.wav
python audfprint.py add -x 10 --dbase test.pklz path/to/new/file.wav
I did not see anything in the documentation about how to use the program within a Python script.
By the way, thanks for writing such a useful program.
Hi,
To see whether I had installed correctly, I ran "new" and then "match" on the same sample mp3 file. However, match ran for a few minutes and then printed "Terminated".
I can't tell if that error matters.
Would appreciate any help. Apologies if I am doing something wrong.
Is there a post-install reference check I can do to see if my install is ok?
python audfprint.py new -d animals2 references/280.mp3
/usr/local/lib/python2.7/dist-packages/librosa/core/audio.py:33: UserWarning: Could not import scikits.samplerate. Falling back to scipy.signal
warnings.warn('Could not import scikits.samplerate. '
Thu Oct 15 07:42:13 2015 ingesting #0: references/280.mp3 ...
Added 162 hashes (16.7 hashes/sec)
Processed 1 files (9.7 s total dur) in 8.9 s sec = 0.913 x RT
Saved fprints for 1 files ( 162 hashes) to animals2
Dropped hashes= 0 (0.00%)
root@ubuntu:~/freetype-2.5.3/audfprint-master# python audfprint.py match -d animals2 references/280.mp3
/usr/local/lib/python2.7/dist-packages/librosa/core/audio.py:33: UserWarning: Could not import scikits.samplerate. Falling back to scipy.signal
warnings.warn('Could not import scikits.samplerate. '
Thu Oct 15 07:43:08 2015 Reading hash table animals2
Terminated
The software seems to take over my whole Debian Jessie dual quad-core machine (Intel i7) when performing a precompute on a 24-hour video obtained from http://oireachtasdebates.oireachtas.ie/ with ncores=1. The files are about 4-6.4 GB in size; should they be fully loaded at processing time, or does the algorithm need the whole video in memory to jump around within it? I'm guessing that RAM and swap are being exhausted: free under Linux shows 6.4 GB used and 1.6 GB, so it seems proportional to the size of the file being processed. The load average on my machine is over 9! Can I add an option to reduce this problem? I know I can use avconv to split the videos into 1-hour segments, but that's not a great workaround.
Also, some files process in 2 minutes while others take 150 minutes.
I'm using density=50.
Hi,
This tool works very well.
I have wrapped it in a node server and it works like a charm.
However, when a request is sent to my server, the server computes the fingerprint from the file and the CPU is pegged at 100%.
For my project that's not viable. I was wondering whether it would be possible to compute the fingerprint on a smartphone in Java (Android) and Swift (iOS), since if the CPU is at 100% it's presumably mainly due to the fingerprint computation?
Thanks
Dan Schultz reports:
I wanted to let you know I'm seeing an error from audfprint:
Traceback (most recent call last):
File "/usr/local/bin/audfprint/audfprint.py", line 482, in
main(sys.argv)
File "/usr/local/bin/audfprint/audfprint.py", line 465, in main
strip_prefix=args['--wavdir'])
File "/usr/local/bin/audfprint/audfprint.py", line 155, in do_cmd
msgs = matcher.file_match_to_msgs(analyzer, hash_tab, filename, num)
File "/usr/local/bin/audfprint/audfprint_match.py", line 326, in file_match_to_msgs
rslts, dur, nhash = self.match_file(analyzer, ht, qry, number)
File "/usr/local/bin/audfprint/audfprint_match.py", line 317, in match_file
rslts = self.match_hashes(ht, q_hashes)
File "/usr/local/bin/audfprint/audfprint_match.py", line 272, in match_hashes
results = self._approx_match_counts(hits, bestids, rawcounts)
File "/usr/local/bin/audfprint/audfprint_match.py", line 228, in _approx_match_counts
allbincounts = np.bincount((allids << timebits) + alltimes)
ValueError: The first argument of bincount must be non-negative
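One plausible way that value goes negative (an assumption, not confirmed from the code): if the id array is 32-bit, shifting a large id left by timebits wraps around to a negative number, which np.bincount rejects. A sketch:

```python
import numpy as np

timebits = 14                               # assumed shift width
ids = np.array([200000], dtype=np.int32)    # id big enough to overflow
shifted = (ids << timebits)[0]
print(shifted)                              # negative after int32 wrap

# Casting to a wider type before the shift avoids the wraparound:
safe = (ids.astype(np.int64) << timebits)[0]
print(safe)                                 # 3276800000, as intended
```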
Currently, peaks are paired with peaks in the allowable time/frequency range on a first-come, first-served basis until the fan out is exhausted. To match more uniformly between samples with widely differing amounts of clutter (or even different densities), the fan out should be a uniform random sample from the peaks in the range.
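The proposed change might be sketched like this (sample_partners is a hypothetical helper for illustration, not existing audfprint code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_partners(candidate_idx, fanout, rng):
    """Uniform random sample of in-range peaks instead of first-come.

    candidate_idx: indices of peaks inside the time/freq target zone.
    """
    candidate_idx = np.asarray(candidate_idx)
    if len(candidate_idx) <= fanout:
        return list(candidate_idx)
    return list(rng.choice(candidate_idx, size=fanout, replace=False))

# Dense region: 50 candidates, but still only `fanout` pairs formed,
# drawn uniformly rather than biased toward the earliest peaks.
partners = sample_partners(np.arange(50), fanout=3, rng=rng)
print(partners)
```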