arikrupnik / ltcsync
Syncs media files using embedded timecode (LTC)
License: GNU General Public License v3.0
Post workflows for Final Cut and Premiere users would be simpler if we could generate XML timelines they could import directly into the NLE, instead of using unfamiliar padding files.
DaVinci Resolve offers valuable insight into timeline exports, and ready examples of such files. I include here two such files: FCP XML (ironically, for Premiere) and FCPXML for Final Cut Pro X. Both files contain the exact same multicam timeline, with two .mov files in it.
file.fcpxml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE fcpxml>
<fcpxml version="1.8">
<resources>
<format width="1920" frameDuration="1001/24000s" id="r0" height="1080" name="FFVideoFormat1080p2398"/>
<format width="1920" frameDuration="1001/30000s" id="r1" height="1080" name="FFVideoFormat1080p2997"/>
<asset start="0/1s" duration="2221219/30000s" audioSources="1" id="r2" audioChannels="2" hasVideo="1" src="file:///media/ari/heir/dish-testimonials-ks1/EricJewell.2.prores.mov" hasAudio="1" format="r1" name="EricJewell.2.prores.mov"/>
<asset start="0/1s" duration="339339/2000s" audioSources="1" id="r3" audioChannels="2" hasVideo="1" src="file:///media/ari/heir/dish-testimonials-ks1/AdamMoffat.prores.mov" hasAudio="1" format="r0" name="AdamMoffat.prores.mov"/>
</resources>
<library>
<event name="Timeline 2 (Resolve)">
<project name="Timeline 2 (Resolve)">
<sequence duration="339339/2000s" tcStart="18018/5s" format="r0" tcFormat="NDF">
<spine>
<gap start="3600/1s" duration="73073/1500s" offset="18018/5s" name="Gap">
<asset-clip start="0/1s" duration="339339/2000s" ref="r3" enabled="1" offset="3600/1s" format="r0" tcFormat="NDF" lane="1" name="AdamMoffat.prores.mov">
<adjust-transform anchor="0 0" scale="1 1" position="0 0"/>
</asset-clip>
</gap>
<asset-clip start="0/1s" duration="71071/960s" ref="r2" enabled="1" offset="5478473/1500s" format="r1" tcFormat="NDF" name="EricJewell.2.prores.mov">
<adjust-transform anchor="0 0" scale="1 1" position="0 0"/>
</asset-clip>
<gap start="3600/1s" duration="3003/64s" offset="29810781/8000s" name="Gap"/>
</spine>
</sequence>
</project>
</event>
</library>
</fcpxml>
file.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xmeml>
<xmeml version="5">
<sequence>
<name>Timeline 2 (Resolve)</name>
<duration>4067</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<in>-1</in>
<out>-1</out>
<timecode>
<string>01:00:00:00</string>
<frame>86400</frame>
<displayformat>NDF</displayformat>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
</timecode>
<media>
<video>
<track>
<clipitem id="EricJewell.2.prores.mov 0">
<name>EricJewell.2.prores.mov</name>
<duration>1775</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<start>1168</start>
<end>2943</end>
<enabled>TRUE</enabled>
<in>0</in>
<out>1775</out>
<file id="EricJewell.2.prores.mov 2">
<duration>2219</duration>
<rate>
<timebase>30</timebase>
<ntsc>TRUE</ntsc>
</rate>
<name>EricJewell.2.prores.mov</name>
<pathurl>file:///media/ari/heir/dish-testimonials-ks1/EricJewell.2.prores.mov</pathurl>
<timecode>
<string>00:00:00:00</string>
<displayformat>NDF</displayformat>
<rate>
<timebase>30</timebase>
<ntsc>TRUE</ntsc>
</rate>
</timecode>
<media>
<video>
<duration>2219</duration>
<samplecharacteristics>
<width>1920</width>
<height>1080</height>
</samplecharacteristics>
</video>
<audio>
<channelcount>2</channelcount>
</audio>
</media>
</file>
<compositemode>normal</compositemode>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>1775</end>
<effect>
<name>Opacity</name>
<effectid>opacity</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>opacity</name>
<parameterid>opacity</parameterid>
<value>100</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
</effect>
</filter>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>1775</end>
<effect>
<name>Basic Motion</name>
<effectid>basic</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>Scale</name>
<parameterid>scale</parameterid>
<value>100</value>
<valuemin>0</valuemin>
<valuemax>10000</valuemax>
</parameter>
<parameter>
<name>Center</name>
<parameterid>center</parameterid>
<value>
<horiz>0</horiz>
<vert>0</vert>
</value>
</parameter>
<parameter>
<name>Rotation</name>
<parameterid>rotation</parameterid>
<value>0</value>
<valuemin>-100000</valuemin>
<valuemax>100000</valuemax>
</parameter>
<parameter>
<name>Anchor Point</name>
<parameterid>centerOffset</parameterid>
<value>
<horiz>0</horiz>
<vert>0</vert>
</value>
</parameter>
</effect>
</filter>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>1775</end>
<effect>
<name>Crop</name>
<effectid>crop</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>left</name>
<parameterid>left</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>right</name>
<parameterid>right</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>top</name>
<parameterid>top</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>bottom</name>
<parameterid>bottom</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
</effect>
</filter>
<link>
<linkclipref>EricJewell.2.prores.mov 0</linkclipref>
</link>
<link>
<linkclipref>EricJewell.2.prores.mov 3</linkclipref>
</link>
</clipitem>
<enabled>TRUE</enabled>
<locked>FALSE</locked>
</track>
<track>
<clipitem id="AdamMoffat.prores.mov 0">
<name>AdamMoffat.prores.mov</name>
<duration>4068</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<start>0</start>
<end>4068</end>
<enabled>TRUE</enabled>
<in>0</in>
<out>4068</out>
<file id="AdamMoffat.prores.mov 2">
<duration>4068</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<name>AdamMoffat.prores.mov</name>
<pathurl>file:///media/ari/heir/dish-testimonials-ks1/AdamMoffat.prores.mov</pathurl>
<timecode>
<string>00:00:00:00</string>
<displayformat>NDF</displayformat>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
</timecode>
<media>
<video>
<duration>4068</duration>
<samplecharacteristics>
<width>1920</width>
<height>1080</height>
</samplecharacteristics>
</video>
<audio>
<channelcount>2</channelcount>
</audio>
</media>
</file>
<compositemode>normal</compositemode>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>4068</end>
<effect>
<name>Opacity</name>
<effectid>opacity</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>opacity</name>
<parameterid>opacity</parameterid>
<value>100</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
</effect>
</filter>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>4068</end>
<effect>
<name>Basic Motion</name>
<effectid>basic</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>Scale</name>
<parameterid>scale</parameterid>
<value>100</value>
<valuemin>0</valuemin>
<valuemax>10000</valuemax>
</parameter>
<parameter>
<name>Center</name>
<parameterid>center</parameterid>
<value>
<horiz>0</horiz>
<vert>0</vert>
</value>
</parameter>
<parameter>
<name>Rotation</name>
<parameterid>rotation</parameterid>
<value>0</value>
<valuemin>-100000</valuemin>
<valuemax>100000</valuemax>
</parameter>
<parameter>
<name>Anchor Point</name>
<parameterid>centerOffset</parameterid>
<value>
<horiz>0</horiz>
<vert>0</vert>
</value>
</parameter>
</effect>
</filter>
<filter>
<enabled>TRUE</enabled>
<start>0</start>
<end>4068</end>
<effect>
<name>Crop</name>
<effectid>crop</effectid>
<effecttype>motion</effecttype>
<mediatype>video</mediatype>
<effectcategory>motion</effectcategory>
<parameter>
<name>left</name>
<parameterid>left</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>right</name>
<parameterid>right</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>top</name>
<parameterid>top</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
<parameter>
<name>bottom</name>
<parameterid>bottom</parameterid>
<value>0</value>
<valuemin>0</valuemin>
<valuemax>100</valuemax>
</parameter>
</effect>
</filter>
<link>
<linkclipref>AdamMoffat.prores.mov 0</linkclipref>
</link>
<link>
<linkclipref>AdamMoffat.prores.mov 3</linkclipref>
</link>
</clipitem>
<enabled>TRUE</enabled>
<locked>FALSE</locked>
</track>
<format>
<samplecharacteristics>
<width>1920</width>
<height>1080</height>
<pixelaspectratio>square</pixelaspectratio>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<codec>
<appspecificdata>
<appname>Final Cut Pro</appname>
<appmanufacturer>Apple Inc.</appmanufacturer>
<data>
<qtcodec/>
</data>
</appspecificdata>
</codec>
</samplecharacteristics>
</format>
</video>
<audio>
<track>
<clipitem id="EricJewell.2.prores.mov 3">
<name>EricJewell.2.prores.mov</name>
<duration>1775</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<start>1168</start>
<end>2943</end>
<enabled>TRUE</enabled>
<in>0</in>
<out>1775</out>
<file id="EricJewell.2.prores.mov 2"/>
<sourcetrack>
<mediatype>audio</mediatype>
<trackindex>1</trackindex>
</sourcetrack>
<link>
<linkclipref>EricJewell.2.prores.mov 0</linkclipref>
</link>
<link>
<linkclipref>EricJewell.2.prores.mov 3</linkclipref>
</link>
</clipitem>
<enabled>TRUE</enabled>
<locked>FALSE</locked>
</track>
<track>
<clipitem id="AdamMoffat.prores.mov 3">
<name>AdamMoffat.prores.mov</name>
<duration>4068</duration>
<rate>
<timebase>24</timebase>
<ntsc>TRUE</ntsc>
</rate>
<start>0</start>
<end>4068</end>
<enabled>TRUE</enabled>
<in>0</in>
<out>4068</out>
<file id="AdamMoffat.prores.mov 2"/>
<sourcetrack>
<mediatype>audio</mediatype>
<trackindex>1</trackindex>
</sourcetrack>
<link>
<linkclipref>AdamMoffat.prores.mov 0</linkclipref>
</link>
<link>
<linkclipref>AdamMoffat.prores.mov 3</linkclipref>
</link>
</clipitem>
<enabled>TRUE</enabled>
<locked>FALSE</locked>
</track>
</audio>
</media>
</sequence>
</xmeml>
Right now, the Makefile builds LTCsync for all supported platforms. While the goal is to continue to support cross-platform builds, it is helpful to allow users to build only for their OS, and to select that OS automatically, perhaps by evaluating node -p 'process.platform' and node -p 'process.arch'.
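A minimal sketch of the auto-selection. The target names ("linux-x64", "mac-arm64", and so on) are hypothetical; the real Makefile targets may be spelled differently.

```javascript
// Sketch: derive a single build target from the host platform and
// architecture. Target-name scheme is an assumption, not the Makefile's.
function buildTarget(platform, arch) {
  const names = {linux: "linux", darwin: "mac", win32: "win"};
  if (!(platform in names)) {
    throw new Error(`unsupported platform: ${platform}`);
  }
  return `${names[platform]}-${arch}`;
}

// On the machine running the build:
console.log(buildTarget(process.platform, process.arch));
```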
ffmpeg interprets some still image formats such as JPG and PNG as sequences of frames of length 1. This puts such files in the "Files without embedded timecode" bucket instead of ignoring them as irrelevant to sync. Some of these files don't have a duration or duration_ts field, and are easy to filter out. Perhaps the more general solution is to require more than one frame for files with a single video stream.
Examples:
{
"streams": [
{
"index": 0,
"codec_name": "png",
"codec_long_name": "PNG (Portable Network Graphics) image",
"codec_type": "video",
"codec_time_base": "0/1",
"codec_tag_string": "[0][0][0][0]",
"codec_tag": "0x0000",
"width": 1920,
"height": 1080,
"coded_width": 1920,
"coded_height": 1080,
"has_b_frames": 0,
"sample_aspect_ratio": "1:1",
"display_aspect_ratio": "16:9",
"pix_fmt": "rgba",
"level": -99,
"color_range": "pc",
"refs": 1,
"r_frame_rate": "25/1",
"avg_frame_rate": "0/0",
"time_base": "1/25",
"disposition": {
"default": 0,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
}
}
],
"format": {
"filename": "xxxxxx.png",
"nb_streams": 1,
"nb_programs": 0,
"format_name": "png_pipe",
"format_long_name": "piped png sequence",
"size": "81020",
"probe_score": 99
}
}
{
"streams": [
{
"index": 0,
"codec_name": "mjpeg",
"codec_long_name": "Motion JPEG",
"profile": "194",
"codec_type": "video",
"codec_time_base": "0/1",
"codec_tag_string": "[0][0][0][0]",
"codec_tag": "0x0000",
"width": 960,
"height": 240,
"coded_width": 960,
"coded_height": 240,
"has_b_frames": 0,
"sample_aspect_ratio": "1:1",
"display_aspect_ratio": "4:1",
"pix_fmt": "gray",
"level": -99,
"color_space": "bt470bg",
"chroma_location": "center",
"refs": 1,
"r_frame_rate": "25/1",
"avg_frame_rate": "0/0",
"time_base": "1/25",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 1,
"duration": "0.040000",
"bits_per_raw_sample": "8",
"disposition": {
"default": 0,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
}
}
],
"format": {
"filename": "xxxxxx.jpg",
"nb_streams": 1,
"nb_programs": 0,
"format_name": "image2",
"format_long_name": "image2 sequence",
"start_time": "0.000000",
"duration": "0.040000",
"size": "38873",
"bit_rate": "7774600",
"probe_score": 50
}
}
{
"streams": [
{
"index": 0,
"codec_name": "ansi",
"codec_long_name": "ASCII/ANSI art",
"codec_type": "video",
"codec_time_base": "1/25",
"codec_tag_string": "[0][0][0][0]",
"codec_tag": "0x0000",
"width": 640,
"height": 400,
"coded_width": 640,
"coded_height": 400,
"has_b_frames": 0,
"pix_fmt": "pal8",
"level": -99,
"refs": 1,
"r_frame_rate": "25/1",
"avg_frame_rate": "25/1",
"time_base": "1/25",
"duration_ts": 1,
"duration": "0.040000",
"disposition": {
"default": 0,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
}
}
],
"format": {
"filename": "xxxxxx.txt",
"nb_streams": 1,
"nb_programs": 0,
"format_name": "tty",
"format_long_name": "Tele-typewriter",
"duration": "0.040000",
"size": "143",
"bit_rate": "28600",
"probe_score": 50
}
}
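Given probe output like the examples above, a filter along these lines could catch still images. The field names match the ffprobe JSON shown, but the heuristic itself is an assumption:

```javascript
// Sketch: classify an ffprobe result as a still image misread as a
// one-frame sequence. Heuristic (single video stream, missing duration
// fields or a duration_ts of exactly 1) is an assumption.
function isStillImage(probe) {
  const video = (probe.streams || []).filter(s => s.codec_type === "video");
  if (video.length !== 1) return false; // applies only to single-video-stream files
  const s = video[0];
  // Some stills (e.g. the PNG example) carry no duration fields at all.
  if (s.duration === undefined && s.duration_ts === undefined) return true;
  // Others (MJPEG, tty) report exactly one frame.
  return s.duration_ts === 1;
}
```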
Provide, as part of the build process, installers for Linux, Windows and macOS.
It is desirable to allow LTCsync to run with command-line arguments.
File arguments can pre-populate the UI. This is especially relevant while LTCsync is in a read-only stage of development. Eventually, it would be desirable to let the program run without a UI at all. This would allow users to incorporate it more easily into their automated workflows.
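A rough sketch of extracting file arguments to pre-populate the UI. The filtering is an assumption: argv layout differs between running unpackaged (`electron .`, with an extra script argument) and a packaged binary.

```javascript
// Sketch: pull file paths out of the command line. Hypothetical helper;
// the real implementation may need platform-specific argv handling.
function fileArgs(argv) {
  return argv
    .slice(1)                        // drop the executable itself
    .filter(a => !a.startsWith("-")  // skip flags
              && !a.endsWith(".js")); // skip the script path when unpackaged
}
```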
It is desirable to assign a formal quality score to LTC tracks to help choose which stream, if any, to use for sync.
Sometimes, random noise may look like LTC frames. Sometimes, LTC can bleed from one channel to another over unbalanced connections. Sometimes the audio has too much noise, which garbles the LTC. Sometimes LTC generators do strange things.
For highest score, LTC streams would have:
It is yet unclear what relative weights we want to assign to these heuristics.
A common use case is using a sound recorder to jam-sync Tentacle(s) which then feed LTC into audio inputs on cameras. The resulting file set has video file(s) with LTC and an audio file without. The audio file has metadata in the format that the specific recorder uses.
It is desirable, for files that have no LTC, to examine the metadata for timestamps.
It is possible to pipe the output of ffmpeg directly into ltcdump and avoid writing and deleting temporary WAV files.
The build fails on Windows Server. Two aspects seem different from desktop installations. First, an executable needs to be on the PATH to run, otherwise execution aborts with Error -1073741515. This is relatively easy, though clunky, to solve by manipulating $PATH. The bigger problem is that while ltcdump runs with the right $PATH, ffmpeg execution fails with "error while loading shared libraries: AVICAP32.dll". This appears to be a DLL that is part of the Desktop-Experience package, which is unavailable on Windows Server.
Hello @arikrupnik, I just tried syncing some files that don't all have audio TC and noticed that Meta Data Timecode information is not read for sync purposes?
Usecase:
Current Situation:
LTCsync only synchronizes using the audio TC channels. But audio recorders, for example, would then also need to record an audio TC track, which is kind of redundant, I would say.
I am not sure how this would be "easily" implemented. One would have to figure out how the metadata TC is stored in different file types (audio files vs. video files, etc.).
It is desirable to run unit tests on every commit, using e.g. Travis.
Files in /libLTC come with unit tests. Running $ node libltc/module.js runs the tests for that module. Many of these tests depend on sample audio or video files, some of which are quite large. Currently, the samples take up 150 MB. It feels wrong to check this much binary data into the repository. One solution is to upload the files to an FTP server and download them as a dependency for the build.
Files that run directly in Electron (index.js, ltcsync.js) need unit tests added. Running Electron code in a headless environment needs special consideration.
Please leave your feedback on the app as a comment on this issue.
If you find a bug, or something that doesn't work how you expect it, feel free to open a new issue.
Small thing, but noticeable: the default icon.
It is desirable to add an application icon to the project, to show on the desktop shortcut and in the corner of a running window.
Hi there, your software is really cool!
Anyway, as some commercial products (such as Red Giant PluralEyes or Syncalia) do, it would be great to enable some kind of synchronization through audio alignment.
Here are some interesting open projects that may help:
Hope that will inspire.
Resolve can parse LTC from video files, but ignores LTC in audio files.
LTCsync can wrap audio streams from an audio file in a video container, allowing Resolve to parse the timecode from that.
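A sketch of the ffmpeg invocation such wrapping might use: pair the audio with a generated black video track and mux into a .mov that Resolve will scan for LTC. The lavfi color source parameters (size, rate) are illustrative choices, not the project's actual settings.

```javascript
// Sketch: argument list for wrapping an audio file in a video container.
function wrapArgs(audioPath, outPath) {
  return [
    "-i", audioPath,                                      // the audio carrying LTC
    "-f", "lavfi", "-i", "color=c=black:s=320x240:r=25",  // synthetic black video
    "-shortest",                                          // stop when the audio ends
    "-c:a", "copy",                                       // keep the audio untouched
    outPath,
  ];
}
```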
Need an intermediate object between editing session/group and file. Some audio recorders and cameras record different audio tracks on separate files, e.g.:
ZOOM0004_LR.WAV
ZOOM0004_Tr1.WAV
ZOOM0004_Tr2.WAV
libSync/sessions.js already provides the function from_same_recording_session(). EditingSession.add_file() needs to call this function and add relevant files to relevant recording sessions. One way to implement this API is to have an array of ffprobe results per file, instead of a scalar.
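The grouping that EditingSession.add_file() would need might look like this sketch. groupIntoSessions() is a hypothetical helper; the sameSession predicate stands in for from_same_recording_session() from libSync/sessions.js.

```javascript
// Sketch: partition files into recording sessions. Each session is an
// array of files; a file joins the first session whose representative
// member it matches.
function groupIntoSessions(files, sameSession) {
  const sessions = [];
  for (const f of files) {
    const match = sessions.find(s => sameSession(s[0], f));
    if (match) match.push(f);
    else sessions.push([f]); // first file of a new recording session
  }
  return sessions;
}
```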
Hello there, I just tried out this tool and wondered why my timecode would not work as the samples do.
I am trying to understand how it all works and also how you implemented things. Also would love to use it for my projects.
Anyways, here is what is happening:
I also have tried the same thing with files recorded to the Atomos Ninja V with the TC coming from F6 to EOS R to Atomos Recorder via HDMI.
Any idea here?
Am I missing something?
A setting maybe?
Because I am seeing the Zoom files in the samples folder that apparently got tracked correctly?
I hope we can figure this out.
Cheers, Chris
Right now, error messages scroll by and disappear after 2 seconds. It is desirable to retain them for the session, and give users the opportunity to share them in bug reports.
For groups of overlapping files, it is desirable to display the offset of each file from the start of the group. Users can use this to arrange files manually on an NLE timeline until we can ship a more comprehensive solution.
License, copyright, website, etc.
It may be a wine problem. electron-packager is the command that hangs.
Long files (> 1 hour) take unacceptably long to process. Consider examining only the first and last 10 minutes of audio for LTC, and showing a progress bar to let the user know processing is in progress.
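The windowing logic could be sketched as follows, assuming a 10-minute window on each end and falling back to a full scan for short files:

```javascript
// Sketch: limit LTC scanning on long files to a head and a tail window.
// Returns {start, length} windows in seconds; 600 s per window is the
// 10-minute figure suggested above.
function scanWindows(durationSec, windowSec = 600) {
  if (durationSec <= 2 * windowSec) {
    return [{start: 0, length: durationSec}]; // short file: scan all of it
  }
  return [
    {start: 0, length: windowSec},                       // opening window
    {start: durationSec - windowSec, length: windowSec}, // closing window
  ];
}
```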
LTCsync has dependencies on three binary executables: ffprobe(1), ffmpeg(1) and ltcdump(1). Binary distributions must include platform-specific copies of these binaries. There is no need to compile (or cross-compile) the binaries on every build. In fact, the ffmpeg project makes static binaries available at https://www.ffmpeg.org/download.html. For ltcdump, we may need to compile binaries ourselves for each platform we support. Since a full build must produce binaries for all platforms, and since the upstream changes infrequently, we may want to keep static ltcdump binaries in the repository instead of cross-compiling them on every build.