Comments (6)
The system is designed for reference items of a few minutes and queries of
tens of seconds. It will work with files an hour long, but beyond that I
don't know. Can you break your files into 1-hour chunks?
DAn.
On Friday, January 29, 2016, eamonnkenny wrote:
The software seems to take over my whole Debian Jessie dual quad-core
machine (Intel i7) when performing a precompute on a 24-hour video
obtained from http://oireachtasdebates.oireachtas.ie/ with ncores=1.
Files are about 4-6.4 GB in size, but should they be fully loaded at the
time of processing? Or does the algorithm require full loading of the
video to jump around within it? Also, some process in 2 minutes whilst
others take 150 minutes.
I'm using density=50.
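For reference, the precompute step in question is normally invoked along these lines (a sketch, not from this thread: the chunk filename is hypothetical, and --precompdir is assumed to be the option naming the .afpk output directory):

  python audfprint.py precompute --density 50 --ncores 1 --precompdir precomp Dail_20060208-00.wav

Each input file then yields a .afpk fingerprint file under the output directory, so the audio only needs to be decoded once.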
Thanks Dan,
1-hour chunks are probably achievable: I can throw away the video stream
for the analysis and then recover it at the end using avconv and mediainfo.
Basically, I'm looking for a parliamentary talk in a 24-hour session. To
find it I split each 24-hour recording into 1-hour sessions, then check
the snippets to find their start times. If a snippet then turns up in,
say, Dail_20060208-11.wav at time 135.2 seconds, I just use
3600 * 11 + 135.2 to get the start time, and pymediainfo will give the
duration of any snippet file using:
duration = float(pymediainfo.MediaInfo.parse(snippetFile).tracks[0].duration) / 1000.0
I'll see how I get on. Thanks for your help.
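A minimal sketch of that bookkeeping, assuming avconv is on the PATH and the chunk index is encoded in the filename as above (the helper names and the splitting command are illustrative, not part of audfprint):

  import re
  import subprocess
  from pymediainfo import MediaInfo

  def split_into_hour_chunks(source, stem, hours=24):
      # Split a long recording into 1-hour audio-only WAV chunks with avconv.
      # Writes e.g. stem-00.wav ... stem-23.wav (naming scheme is illustrative).
      for hour in range(hours):
          subprocess.check_call([
              "avconv", "-i", source,
              "-ss", str(hour * 3600), "-t", "3600",
              "-vn",  # drop the video stream; only the audio is fingerprinted
              "%s-%02d.wav" % (stem, hour),
          ])

  def absolute_start(chunk_file, offset_in_chunk):
      # Map a match offset inside a 1-hour chunk back to the 24-hour timeline.
      hour = int(re.search(r"-(\d+)\.wav$", chunk_file).group(1))
      return 3600 * hour + offset_in_chunk

  def snippet_duration(snippet_file):
      # Duration of a media file in seconds, via pymediainfo (as above).
      return float(MediaInfo.parse(snippet_file).tracks[0].duration) / 1000.0

  print(absolute_start("Dail_20060208-11.wav", 135.2))  # 39735.2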
To get accurate time indices within 1-hour chunks you'll need to increase
--maxtime to 262144 or something. By default (--maxtime 16384) it aliases on
a 6-minute window.
DAn.
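To see where the 6-minute figure comes from: --maxtime counts analysis frames, and assuming audfprint's default analysis rate of a 256-sample hop at 11025 Hz (an assumption about the defaults, not something stated in this thread):

  frames_per_second = 11025 / 256.0   # assumed default hop and sample rate, ~43 frames/s
  print(16384 / frames_per_second)    # ~380 s, i.e. the ~6-minute alias window
  print(262144 / frames_per_second)   # ~6087 s, ~101 minutes, comfortably over an hour

Offsets larger than the window wrap around, so matches deep into an hour-long reference would report aliased times under the default.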
Good to know, thanks. I tried one previously with the default of 16384
and it worked without any problem. I ran the match query with
--min-count 20, --max-matches 100 and --match-win 2 and found that the
result was very accurate.
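For the record, those options combine into a match invocation along these lines (a sketch; the database and snippet filenames are hypothetical):

  python audfprint.py match --dbase fpdbase.pklz --min-count 20 --max-matches 100 --match-win 2 snippet.wav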
Hi Dan,
When I go for maxtime = 262144, must this value be built into the
precomputed large files, the smaller file ingestion, or just the matching
at the end, or all three?
The maxtime parameter has to be specified at the time when the database
file is first created. After that, it is read from the database file.
DAn.
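In other words, --maxtime belongs on the command that first creates the database, and later runs inherit it from there (a sketch assuming audfprint's new/add/match subcommands; the filenames are hypothetical):

  python audfprint.py new --dbase fpdbase.pklz --maxtime 262144 --density 50 chunk-00.wav
  python audfprint.py add --dbase fpdbase.pklz chunk-01.wav chunk-02.wav
  python audfprint.py match --dbase fpdbase.pklz snippet.wav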
Related Issues (20)
- Incorrect time range
- Incorrect Time range.
- Convert .afpk files to mp3?
- Problem with "spreadpeaksinvector"
- Reduce memory usage
- Use audfprint as a module
- How to increase bits for storing IDs and timestamps?
- illustrate
- Scan every folder in every fingerprint base named as folder
- UNICODE characters ERROR
- Show more than one matched name in the results
- Can this algorithm load the historical features into memory first to improve matching speed? I don't know how to modify the basic code
- With a million songs (each about 3 minutes long), matching is very slow; how can it be optimized?
- The times and hashes generated by audfprint are not continuous in time; after binary-searching for identical hashes and ranking by hash count, the original (climax) version of some audio does not rank first
- Some matches are inaccurate: a 3-minute song can share so many hashes with a 1-minute song that its hash count is inflated and the ranking is skewed; are the generated times and hashes continuous over the start-end span or peak-based, and how can this be optimized?
- Output differs between Windows and Linux: with an identical ~340 MB hash database and identical queries over ~100 mp3 files, a Windows machine finds significantly more matches than a Raspberry Pi, both on Python 3.9
- Question about concatenating afpk files
- How to avoid a big % Dropped
- Can someone take on audfprint-gui for audfprint, or create a new GUI for it?
- Ability to split pklz files into smaller sizes