etro-js / etro
TypeScript video-editing framework for the browser
Home Page: https://etrojs.dev
License: GNU General Public License v3.0
Currently, `val()` (found in util.ts) accepts a `time` argument: the current time of the movie relative to the provided vidar element. This is redundant, since every vidar element implements a `currentTime` property. We need to remove this argument from `val` and replace its usages with `element.currentTime`.
Relevant Code:
Add an option to specify the output blob format for `Movie#record`. Right now, it is hardcoded as video/webm.
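A minimal sketch of how such an option could pick a format, assuming a preference list falls back to the current hardcoded default. `isSupported` stands in for the browser's `MediaRecorder.isTypeSupported` so the selection logic can run outside a browser; the option shape is an assumption, not the real API.

```typescript
// Hypothetical format selection for Movie#record (option shape assumed).
// `isSupported` is injected so this can be tested without MediaRecorder.
function chooseMimeType(
  preferred: string[],
  isSupported: (type: string) => boolean
): string {
  for (const type of preferred) {
    if (isSupported(type)) return type; // first supported preference wins
  }
  return 'video/webm'; // the current hardcoded default
}
```

In a browser, the predicate would simply be `t => MediaRecorder.isTypeSupported(t)`.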
Example
https://youtu.be/T1OHFY-lXEM
`mediaStartTime = 8`, but on pause the media starts at 0
The Gaussian blur effect makes the target much brighter if the radius is greater than zero; if it's zero, it makes the target darker. Of course, this effect should not change the brightness / total value of the pixels.
Steps to reproduce:
npm start
to start the development server
Keeping /examples/application/webcam.html open in the browser for several minutes freezes my Linux laptop on FF. It doesn't happen on my desktop (even though my laptop is a gaming laptop).
TODO: investigate
Blocked by #30
If you want I can write a TypeScript type declaration file for this package so that TS developers can utilise the type-checking feature of the language as they use this package. Let me know if you are interested.
After a movie's `currentTime` property is set and its `play` method called (in either order), the following occurs:
On FF, `AbortError: The operation was aborted.` is thrown; and Chrome's behavior doesn't make sense to me.
More information needed.
As seen in examples/introduction/keyframes.html, text interpolation looks very choppy currently. I suspect this has to do with some font size rounding done by the browser. TODO: Investigate
Hello, after many tries I figured out that we can't resize a picture. For example, if the canvas is 600x600 and the picture is 1024x1024: if we load the picture without any options, it is cropped automatically and only a part of the image is shown; and if we set the original height and width in the height and width options, the image loads correctly but does not fit the canvas. I tried to resize the image inside the canvas, but nothing is working.
video.addLayer(new vd.layer.Image(0, 5, document.getElementById('img'), { x: 15, y: 15, width: 1024, height: 1024 }));
Thank you.
Hello, I tried to export the movie to MP4 (codec H264 MPEG-4 AVC), but nothing works. I always get a WEBM file with a corrupted codec. I tried to use an external library to convert the Blob to an MP4 video, but that didn't work either. Do you have any idea?
Already implemented:
`movie.timeupdate` - when `currentTime` changes naturally
`movie.seek` - when `currentTime` is set externally
`movie.end` - when the cursor (`currentTime`) reaches the end of the movie
`movie.loadeddata` - when the current frame is fully loaded (all video elements' `readyState` >= 2)
Pick one of these to implement:
`movie.pause` - when `pause()` is called
`movie.play` - when `play()` is called
`movie.record` - when `record()` is called
`movie.durationchange` - when `duration` changes
Related:
https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Media_events
Port the tests in `spec` to TypeScript.
I'm unable to test some of the examples because files such as:
/examples/assets/lake.jpg
/examples/assets/desert.mp4
are missing. Can they be included in the source?
Thanks
Design and implement five more examples for the examples/application directory. These examples will be instances of how Vidar can be used.
Right now, the events `layer.start` and `layer.stop` are published when a layer becomes active or inactive on the movie's timeline. The user shouldn't be able to subscribe to these events; only the layers should be notified when these things happen. So, they should be refactored into methods on the base layer, which the movie will then call:
layer.Base#start() -> void
layer.Base#stop() -> void
Hi, I'm not sure why the media width and height cannot be edited. I've tried the following ways, but I'm still unable to change the size of the video on the canvas.
movie.addLayer(new vd.layer.Video(0, video, { mediaWidth: cv.width, mediaHeight: cv.height }))
or
movie.addLayer(new vd.layer.Video(0, video, { mediaWidth: 1280, mediaHeight: 720 }))
Add a property called `defaultKeyframeInterpolate` (for example) to movie.js: `Movie`, initialized to the global default keyframe interpolation method, util.js: `linearInterp`. Use this value in util.js: `val` as the new default value.
Steps to reproduce:
npm i
npm start
Here is a simpler example (place one level beneath examples/, like examples/test).
This does not seem to happen with other visual layers, just video layers.
Right now, when a media (or image) layer is created, its `mediaWidth` and `mediaHeight` are set to the respective dimensions of its media (if not already defined). Ideally, if the dimensions are not set when a media layer is created, they should be left undefined. Then, when you need to use them, `width` and `height` are used as fallback values.
Additionally, `mediaWidth` should not be set to the same value as `width` when the media is loaded (like in the constructor of `Video`).
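A getter-based sketch of the fallback described above. The class and getter names here are illustrative stand-ins, not the real implementation:

```typescript
// Hypothetical sketch: mediaWidth stays undefined unless explicitly set,
// and width is used as the fallback only at read time.
class MediaLayer {
  constructor(public width: number, public mediaWidth?: number) {}

  // Resolve the dimension only when it is actually needed.
  get resolvedMediaWidth(): number {
    return this.mediaWidth ?? this.width;
  }
}
```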
Gaussian blur is currently implemented, but the following blur effects are not (in src/effect.js
):
Any other blur effect suggestions?
Hey, I'm writing an app that selects a 5-second section of a video and then encodes/exports it as a webm.
I couldn't find a layer that accepts a duration range, so I shifted my focus onto the `movie.record()` function.
Am I correct in assuming that the easiest way to add sub-section recording is to:
add `start` and `end` options to `record()`, and then
patch the function to start the background-movie playback at an offset using `this.setCurrentTime(options.start)`, and then prematurely end the recording procedure on a `"movie.timeupdate"` event when `currentTime` exceeds `options.end`?
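The stopping half of that idea could be sketched as below. `MovieLike` is a stand-in interface, not vd's real `Movie` class; only the timeupdate-driven stop is shown.

```typescript
// Hypothetical sketch of the timeupdate-based stop for sub-section recording.
interface MovieLike {
  currentTime: number;
  pause(): void;
}

// Returns a handler to run on each "movie.timeupdate"; it pauses the movie
// once currentTime reaches the requested section end.
function makeSectionStopper(movie: MovieLike, end: number): () => void {
  return () => {
    if (movie.currentTime >= end) movie.pause();
  };
}
```

Combined with `this.setCurrentTime(options.start)` before recording begins, a handler like this would bound the recording to the requested section.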
Currently, only integer values for the radius of `GaussianBlur` are allowed. It would be nice to allow floats, but the current implementation creates a Gaussian kernel from the radius. We need to find an implementation that supports non-integer radius values.
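One way to support float radii is to sample the continuous Gaussian instead of building the kernel from integer offsets. The radius-to-sigma mapping below is an assumption; the real implementation may choose differently.

```typescript
// Sketch: sample a 1D Gaussian kernel for a possibly non-integer radius.
function gaussianKernel(radius: number): number[] {
  const sigma = Math.max(radius / 2, 1e-6); // assumed radius-to-sigma mapping
  const size = 2 * Math.ceil(radius) + 1;   // odd-sized kernel around the center
  const center = Math.floor(size / 2);
  const kernel: number[] = [];
  let sum = 0;
  for (let i = 0; i < size; i++) {
    const x = i - center;
    const w = Math.exp(-(x * x) / (2 * sigma * sigma));
    kernel.push(w);
    sum += w;
  }
  // Normalize so the weights sum to 1; a normalized kernel preserves brightness.
  return kernel.map(w => w / sum);
}
```

Normalizing the kernel this way would also address the brightness shift reported against the Gaussian blur effect.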
If `Infinity` is passed to a layer's duration, the layer should never end.
Notes:
`layer.startTime + layer.duration` (where `layer` has infinite duration): that layer will never be active.
Update `Movie.render`'s layer-deactivate check and its `window.requestAnimationFrame` (repeat) check to make sure both that the layer is active and that the movie is playing indefinitely.
`util.val` is used to sample a property, using keyframes, functions, or just returning a single value.
Since the syntax is gross, let's replace it with setters and getters, which use a similar `util.val` function (to prevent repeated code). Besides that, and the fact that each value should be cached per frame (I will make this a new issue), the details are left open for now.
As a side benefit, properties like `layer.width` and `layer.height` can default to the movie's width/height right in the getter, or perform similar behavior (instead of doing it manually with every query). More importantly, caching eliminates the bug of multiple different values from function properties per frame.
get currentTime() {
return this.active ? (this._movie.currentTime - this.startTime) : null;
}
And update a lot of methods all over the place to not take `reltime` as an argument, but to get it from this property or from `Movie`'s `currentTime`.
Running on the GPU will make things a lot more efficient; movie.js's current implementation of visual effects is very inefficient. The two options are to implement this manually (the benefit is control) or to use an existing library, such as gpu.js (the benefit is not reinventing the wheel).
How do I reorder render layers at runtime?
This feature is in the design phase. Please comment if you have ideas.
Support audio effects (similar to visual effects) that work with `movie.actx` (the audio context) and `layer.audioNode` (of media layers).
The main reason for this feature is consistency and keyframe/function support for audio filters. I am still doing research on the web audio API, but anybody can pitch in!
See also:
When examples/introduction/export.html is run in Firefox, the video layer has no sound (even though the audio layer does).
Steps to reproduce:
npm run assets && npm start
to start the dev server
Implement a `currentTime` getter in `layer.Base` that returns the time relative to the layer, or `undefined` if it is not active or not attached to a movie.
Implement a `currentTime` getter in `effect.Base` that returns its target's `currentTime`, or `undefined` if not attached to a target.
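A sketch of both getters, using stand-in classes and a simplified activity check (the real `layer.Base` and `effect.Base` have more state than shown here):

```typescript
// Hypothetical sketch of the two proposed currentTime getters.
interface MovieLike { currentTime: number; }

class BaseLayer {
  constructor(
    public startTime: number,
    public duration: number,
    private movie?: MovieLike
  ) {}

  get active(): boolean {
    if (!this.movie) return false;
    const t = this.movie.currentTime;
    return t >= this.startTime && t < this.startTime + this.duration;
  }

  // Time relative to the layer, or undefined when inactive or unattached.
  get currentTime(): number | undefined {
    return this.active ? this.movie!.currentTime - this.startTime : undefined;
  }
}

class BaseEffect {
  constructor(private target?: { currentTime: number | undefined }) {}

  // Forward the target's currentTime, or undefined when unattached.
  get currentTime(): number | undefined {
    return this.target?.currentTime;
  }
}
```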
We need to show off Vidar's key features by making a video with it. Write a script that creates a video, exports it and downloads it. After this issue is resolved, we'll add a gif of the video to the readme (#31).
Some features you could include:
A layer duration of `undefined` should result in whatever its movie's duration currently is.
Steps to reproduce:
Add `new vd.effect.Pixelate(1)` to the movie.
Hello,
There is an issue when we rotate an image: the image gets cropped.
When we apply `Matrix#rotate(a)` on an image, the image gets cropped. Is there any solution so we can keep showing all of the image while rotating it?
It seems that video exporting doesn't work except when you disable audio with the option `{audio: false}`.
I investigated, and the problem is related to `this.actx.createMediaStreamDestination()`: it seems there is silence in it in any case, even when there is an audio/video layer, and as you wrote, Chrome doesn't record silence, so an empty blob is produced. The workaround I found is to pre-create the MediaStreamDestination in the Movie constructor and use it when attaching the media, like this:
this._source = movie.actx.createMediaElementSource(this.media)
this.source.connect(movie.actxdst) // <--- precreated actx.createMediaStreamDestination()
this.source.connect(movie.actx.destination)
It seems to work, but I'm not sure this is legal.
P.S. I also had to change the example to start exporting on user click, not on window load, to make it work stably, but that's a known issue as I understand.
P.P.S. I was able to reproduce this bug on Edge/Chrome/Yandex Browser/Opera on Android. With this workaround, exporting seems to work in all those browsers.
Hello, I tried to export the video to MP4 but I couldn't. Is there any way to do that?
TODO: research making these npm packages
Packages will be sets of effects and/or layers
Packages will be exposed on `vd` from an intermediate packages/index.js, and then imported and individually exported directly into index.js.
packages/index.js should look something like this:
// Package names must not conflict with other exported properties from **index.js**.
export {default as packageA} from "./package-a.js";
export {default as packageB} from "./package-b.js";
Then, add this in index.js:
// (other imported modules omitted here)
import * as packages from "./packages/index.js";
export default {
// (other exported properties omitted here)
...packages // avoid having to call packages with `vd.packages.packageName`, which is gross
};
A final use example could look like this:
let effect = new vd.packageName.effect.EffectName();
TODO: registering user packages.
Make the `val` function in the `util` module support callback functions (along with simple values and keyframe objects).
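A minimal sketch of the value-or-callback part (keyframe objects are omitted here; the real `val` signature in `util` differs):

```typescript
// Hypothetical sketch: a property may be a plain value or a callback of time.
type Dynamic<T> = T | ((time: number) => T);

function val<T>(property: Dynamic<T>, time: number): T {
  return typeof property === 'function'
    ? (property as (time: number) => T)(time) // evaluate the callback
    : property;                               // pass plain values through
}
```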
Due to the inability to share a layer's canvas among 2D and webgl contexts, I think we should assign a canvas to each effect.
When rendering, use the canvas of the previous effect (or the last frame's layer output, if it's the first effect) as input and its own canvas as output. When the final effect has executed, copy the output from its canvas to the layer's canvas.
The new effect `apply` method signature should probably look like this:
apply(source, layer)
where `source` is the previous effect
Add more effects. You can check the list of currently implemented effects in src/effect.js and then start with one of these:
Instead of erroring `No keyframes located after or at time TIME`, repeat the last keyframe for all times after it.
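The clamping behavior could look like this sketch, which interpolates linearly between keyframes and holds the last value forever (the keyframe representation here is illustrative, not the library's actual format):

```typescript
// Sketch: clamp to the nearest keyframe outside the defined range
// instead of throwing.
type Keyframe = [time: number, value: number];

function sampleKeyframes(keyframes: Keyframe[], time: number): number {
  // keyframes assumed sorted by time and non-empty
  if (time <= keyframes[0][0]) return keyframes[0][1];
  const [lastTime, lastValue] = keyframes[keyframes.length - 1];
  if (time >= lastTime) return lastValue; // repeat the last keyframe forever
  for (let i = 0; i < keyframes.length - 1; i++) {
    const [t0, v0] = keyframes[i];
    const [t1, v1] = keyframes[i + 1];
    if (time <= t1) return v0 + ((v1 - v0) * (time - t0)) / (t1 - t0);
  }
  return lastValue; // unreachable for well-formed input
}
```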
The user should be able to set `disabled` to `true` on effects and layers to disable them.
This effect takes a "mask" layer as input and multiplies the transparency of every pixel in the target by the brightness of the corresponding mask pixel (white pixel = transparency doesn't change, black pixel = fully transparent).
To utilize hardware acceleration, you should subclass the `Shader` effect.
Relevant code:
Hello, when I try to add objects to a layer, like text, they become pixelated. Also, when I add an image with a width and height lower than the width and height of the canvas, the canvas doesn't show the full image; conversely, if I add a bigger image, the canvas crops it.
Example
const createMediaEl = (src, type) => {
const el = document.createElement(type);
el.src = src;
return el;
};
const videoElement = createMediaEl("https://www.w3schools.com/html/mov_bbb.mp4", "video");
const layer1 = new vd.layer.Video(2, videoElement, {
mediaStartTime: 3
});
layer1.mediaStartTime = 8;
In the proxy `mediaStartTime === 8`, but on play I see `mediaStartTime === 0`.
It looks like the Safari browser doesn't have AudioContext support.
But there is a polyfill: https://github.com/chrisguttandin/standardized-audio-context
Blocked by #15
Because function properties don't have to be deterministic (for a given time in the movie), yet they should be constant for each frame, they should be cached per-frame. This way, multiple references to one property won't yield different results in one frame.
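The caching idea can be sketched as a small wrapper: the function property is evaluated once per frame time and the result reused, so repeated reads within one frame agree even if the function is non-deterministic (names here are illustrative):

```typescript
// Sketch of per-frame caching for function properties.
function cachePerFrame<T>(fn: (time: number) => T): (time: number) => T {
  let cachedTime: number | undefined;
  let cachedValue: T;
  return (time: number): T => {
    if (time !== cachedTime) {
      cachedTime = time;       // new frame: invalidate the cache
      cachedValue = fn(time);  // evaluate the property once
    }
    return cachedValue;        // same frame: reuse the cached value
  };
}
```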
Change
movie.addLayer(new mv.layer.Base(0, 4));
to
movie.addLayer(0, new mv.layer.Base(4));