
ffmpeg-python's Introduction

ffmpeg-python: Python bindings for FFmpeg


Overview

There are tons of Python FFmpeg wrappers out there but they seem to lack complex filter support. ffmpeg-python works well for simple as well as complex signal graphs.

Quickstart

Flip a video horizontally:

import ffmpeg
stream = ffmpeg.input('input.mp4')
stream = ffmpeg.hflip(stream)
stream = ffmpeg.output(stream, 'output.mp4')
ffmpeg.run(stream)

Or if you prefer a fluent interface:

import ffmpeg
(
    ffmpeg
    .input('input.mp4')
    .hflip()
    .output('output.mp4')
    .run()
)

Complex filter graphs

FFmpeg is extremely powerful, but its command-line interface gets really complicated rather quickly - especially when working with signal graphs and doing anything more than trivial.

Take for example a signal graph that looks like this:

Signal graph

The corresponding command-line arguments are pretty gnarly:

ffmpeg -i input.mp4 -i overlay.png -filter_complex "[0]trim=start_frame=10:end_frame=20[v0];\
    [0]trim=start_frame=30:end_frame=40[v1];[v0][v1]concat=n=2[v2];[1]hflip[v3];\
    [v2][v3]overlay=eof_action=repeat[v4];[v4]drawbox=50:50:120:120:red:t=5[v5]"\
    -map [v5] output.mp4

Maybe this looks great to you, but if you're not an FFmpeg command-line expert, it probably looks alien.

If you're like me and find Python to be powerful and readable, it's easier with ffmpeg-python:

import ffmpeg

in_file = ffmpeg.input('input.mp4')
overlay_file = ffmpeg.input('overlay.png')
(
    ffmpeg
    .concat(
        in_file.trim(start_frame=10, end_frame=20),
        in_file.trim(start_frame=30, end_frame=40),
    )
    .overlay(overlay_file.hflip())
    .drawbox(50, 50, 120, 120, color='red', thickness=5)
    .output('out.mp4')
    .run()
)

ffmpeg-python takes care of running ffmpeg with the command-line arguments that correspond to the above filter diagram, in familiar Python terms.

Screenshot

Real-world signal graphs can get a heck of a lot more complex, but ffmpeg-python handles arbitrarily large (directed-acyclic) signal graphs.

Installation

Installing ffmpeg-python

The latest version of ffmpeg-python can be acquired via a typical pip install:

pip install ffmpeg-python

Or the source can be cloned and installed locally:

git clone git@github.com:kkroening/ffmpeg-python.git
pip install -e ./ffmpeg-python

Note: ffmpeg-python makes no attempt to download/install FFmpeg, as ffmpeg-python is merely a pure-Python wrapper - whereas FFmpeg installation is platform-dependent/environment-specific, and is thus the responsibility of the user, as described below.

Installing FFmpeg

Before using ffmpeg-python, FFmpeg must be installed and accessible via the $PATH environment variable.

There are a variety of ways to install FFmpeg, such as the official download links, or using your package manager of choice (e.g. sudo apt install ffmpeg on Debian/Ubuntu, brew install ffmpeg on OS X, etc.).

Regardless of how FFmpeg is installed, you can check if your environment path is set correctly by running the ffmpeg command from the terminal, in which case the version information should appear, as in the following example (truncated for brevity):

$ ffmpeg
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)

Note: The actual version information displayed here may vary from one system to another; but if a message such as ffmpeg: command not found appears instead of the version information, FFmpeg is not properly installed.
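You can also perform the same check from Python with nothing but the standard library, which is handy for failing fast with a clear error message before any ffmpeg-python call:

```python
import shutil

# shutil.which() searches $PATH the same way the shell does.
ffmpeg_path = shutil.which('ffmpeg')
if ffmpeg_path is None:
    print('ffmpeg not found on $PATH; install FFmpeg first')
else:
    print('Found FFmpeg at', ffmpeg_path)
```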

When in doubt, take a look at the examples to see if there's something that's close to whatever you're trying to do.

Here are a few:

jupyter demo

deep dream streaming

See the Examples README for additional examples.

Custom Filters

Don't see the filter you're looking for? While ffmpeg-python includes shorthand notation for some of the most commonly used filters (such as concat), all filters can be referenced via the .filter operator:

stream = ffmpeg.input('dummy.mp4')
stream = ffmpeg.filter(stream, 'fps', fps=25, round='up')
stream = ffmpeg.output(stream, 'dummy2.mp4')
ffmpeg.run(stream)

Or fluently:

(
    ffmpeg
    .input('dummy.mp4')
    .filter('fps', fps=25, round='up')
    .output('dummy2.mp4')
    .run()
)

Special option names:

Arguments with special names such as -qscale:v (variable bitrate), -b:v (constant bitrate), etc. can be specified as a keyword-args dictionary as follows:

(
    ffmpeg
    .input('in.mp4')
    .output('out.mp4', **{'qscale:v': 3})
    .run()
)

Multiple inputs:

Filters that take multiple input streams can be used by passing the input streams as an array to ffmpeg.filter:

main = ffmpeg.input('main.mp4')
logo = ffmpeg.input('logo.png')
(
    ffmpeg
    .filter([main, logo], 'overlay', 10, 10)
    .output('out.mp4')
    .run()
)

Multiple outputs:

Filters that produce multiple outputs can be used with .filter_multi_output:

split = (
    ffmpeg
    .input('in.mp4')
    .filter_multi_output('split')  # or `.split()`
)
(
    ffmpeg
    .concat(split[0], split[1].reverse())
    .output('out.mp4')
    .run()
)

(In this particular case, .split() is the equivalent shorthand, but the general approach works for other multi-output filters)

String expressions:

Expressions to be interpreted by ffmpeg can be included as string parameters and reference any special ffmpeg variable names:

(
    ffmpeg
    .input('in.mp4')
    .filter('crop', 'in_w-2*10', 'in_h-2*20')
    .output('out.mp4')
)

When in doubt, refer to the existing filters, examples, and/or the official ffmpeg documentation.

Frequently asked questions

Why do I get an import/attribute/etc. error from import ffmpeg?

Make sure you ran pip install ffmpeg-python and not pip install ffmpeg (wrong) or pip install python-ffmpeg (also wrong).

Why did my audio stream get dropped?

Some ffmpeg filters drop audio streams, and care must be taken to preserve the audio in the final output. The .audio and .video operators can be used to reference the audio/video portions of a stream so that they can be processed separately and then re-combined later in the pipeline.

This dilemma is intrinsic to ffmpeg, and ffmpeg-python tries to stay out of the way while users may refer to the official ffmpeg documentation as to why certain filters drop audio.

As usual, take a look at the examples (Audio/video pipeline in particular).

How can I find out the used command line arguments?

You can run stream.get_args() before stream.run() to retrieve the command-line arguments that will be passed to ffmpeg. You can also run stream.compile(), which also includes the ffmpeg executable as the first argument.

How do I do XYZ?

Take a look at each of the links in the Additional Resources section at the end of this README. If you look everywhere and can't find what you're looking for and have a question that may be relevant to other users, you may open an issue asking how to do it, while providing a thorough explanation of what you're trying to do and what you've tried so far.

Issues not directly related to ffmpeg-python or issues asking others to write your code for you or how to do the work of solving a complex signal processing problem for you that's not relevant to other users will be closed.

That said, we hope to continue improving our documentation and provide a community of support for people using ffmpeg-python to do cool and exciting things.

Contributing


One of the best things you can do to help make ffmpeg-python better is to answer open questions in the issue tracker. The questions that are answered will be tagged and incorporated into the documentation, examples, and other learning resources.

If you notice things that could be better in the documentation or overall development experience, please say so in the issue tracker. And of course, feel free to report any bugs or submit feature requests.

Pull requests are welcome as well, but it wouldn't hurt to touch base in the issue tracker or hop on the Matrix chat channel first.

Anyone who fixes any of the open bugs or implements requested enhancements is a hero, but changes should include passing tests.

Running tests

git clone git@github.com:kkroening/ffmpeg-python.git
cd ffmpeg-python
virtualenv venv
. venv/bin/activate    # (OS X / Linux)
venv\Scripts\activate  # (Windows)
pip install -e .[dev]
pytest

Special thanks

Additional Resources

ffmpeg-python's People

Contributors

0x3333, 153957, 372046933, akolpakov, apatsekin, cclauss, depau, jacotsu, jdlh, kkroening, komar007, kylemcdonald, laurentalacoque, ljhcage, magnusvmt, nitaym, noahstier, raulpy271, revolter, rping, tirkarthi, xdimgg


ffmpeg-python's Issues

complex filter only?

Hi!

First, this is a great project for composing complex filter graphs.

For my current use (generating thumbnails from video), the project does not seem to support options like -ss or -vf
(I was doing -vf 'select=eq(n\,999)' or -ss 55.00).

The filter_() method seems to output complex filters only?

Duration flag

I want to use the duration flag:
-t duration

I'm rendering a video using a .png as input, like this:

ffmpeg -loop 1 -framerate 23.98 -t 60 -i "background.png" output.mp4

Is there a way to accomplish this? Thanks for the help.

Numpy filters

It should be possible to implement custom pixel-filtering in python as in the following example:

from ffmpeg.nodes import operator
import ffmpeg

@operator()
def my_custom_filter(parent):
    def process_frame(frame):
        height, width = frame.shape[:2]
        for x in range(width):
            for y in range(height):
                pixel = frame[y, x, :]
                r, g, b = pixel
                pixel.flat = g, r, b  # swap red/green
    return ffmpeg.map_frames_numpy(parent, process_frame)

ffmpeg \
    .input('in.mp4') \
    .my_custom_filter() \
    .output('out.mp4') \
    .run()

This is a simple example of swapping red/green values using numpy but it could be as sophisticated as you want. Here are some ideas:

  • render image with OpenGL shader
  • process image with tensorflow and render information/overlays

The ffmpeg wrapper needs to spawn and coordinate multiple ffmpeg processes and connect their input/output streams using pipes, but should be doable.

Add examples

An examples/ directory should be added with a set of commonly encountered use-cases. Some examples of possible examples (yo dawg):

  • Convert between video formats
  • Framerate transcoding
  • Piping to/from stdout/stdin
  • Complex filtering
  • Live-streaming (e.g. RTMP)
  • Loading video from webcam into TensorFlow with on-the-fly pre-processing (image stabilization, etc.)
  • Webcam + Social-media-feed -> Preprocessing -> TensorFlow -> Live-stream

Add 'alsa' audio to /dev/videoX input

I'm trying to add 'alsa' audio input to a stream from a /dev/videoX device.

I can run both separately using ffmpeg-python, but I'm not able to join them in the same stream. I need to record video from /dev/videoX and audio from an 'alsa' device, and stream both of them at the same time as audio+video to an HTTP address (I can do it with only video or only audio).

Is it possible to do this?

Outgoing edges of nodes should be sorted based on when a call to `.stream()` created them

i = ffmpeg.input("f1.mp4")
ref = ffmpeg.input("f2.mp4")
# BTW `filter_multi_output` is not in __all__, needs #65
s2ref = ffmpeg.filter_([i, ref], "scale2ref").node
scaled = s2ref[0]
ref2 = s2ref[1]

https://ffmpeg.org/ffmpeg-all.html#scale2ref

Screenshot_from_2018-01-26_14-33-23.png

You would expect the output of scale2ref labeled 0 (scaled) to be the first output of scale2ref (because .stream() was called first), and 1 (ref2) to be the second. However, this doesn't always happen: it's totally up to topo_sort().

I'll try to fix it and send a PR.

Does not work with Python 3

Perhaps one should add a Python 2 restrictor on the setup.py? Or make use of six for supporting both? Or port to Python 3?

(cpython36) benjolitz-laptop:~/software$ python
Python 3.6.0 (default, Dec 24 2016, 08:01:42)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ffmpeg
>>> i = ffmpeg.input('/Users/BenJolitz/Downloads/b195bd7c-aa7b-44c4-83a5-2c12e77a8784.mov')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/BenJolitz/.virtualenvs/cpython36/lib/python3.6/site-packages/ffmpeg/_ffmpeg.py", line 15, in input
    return InputNode(input.__name__, filename=filename)
  File "/Users/BenJolitz/.virtualenvs/cpython36/lib/python3.6/site-packages/ffmpeg/nodes.py", line 35, in __init__
    super(InputNode, self).__init__(parents=[], name=name, *args, **kwargs)
  File "/Users/BenJolitz/.virtualenvs/cpython36/lib/python3.6/site-packages/ffmpeg/nodes.py", line 14, in __init__
    self._update_hash()
  File "/Users/BenJolitz/.virtualenvs/cpython36/lib/python3.6/site-packages/ffmpeg/nodes.py", line 26, in _update_hash
    my_hash = hashlib.md5(json.dumps(props, sort_keys=True)).hexdigest()
TypeError: Unicode-objects must be encoded before hashing
>>>

Pass-throughs should not use `-map`

This currently produces the wrong ffmpeg args:

>>> ' '.join(ffmpeg
...     .merge_outputs(
...         (ffmpeg
...             .input('in1.mp4')
...             .output('out1.mp4')
...         ),
...         (ffmpeg
...             .input('in2.mp4')
...             .output('out2.mp4')
...         )
...     )
...     .get_args()
... )
u'-i in1.mp4 -i in2.mp4 out1.mp4 -map [1] out2.mp4'

The -map should not be there, since no -filter_complex param is needed.

Should be fairly easy to fix by omitting -map for passthroughs and generating output params in the correct order.

See test_multi_passthrough in feature/17 (or master once #17 is merged).

Graph visualization

Finish code from #17 for visualizing graphs.

Basically just need to determine the right way to handle import errors for graphviz, since I don't want graphviz to be a requirement for ffmpeg-python.

Passing flags to ffmpeg

I would like to pass on the following flags to ffmpeg

-nostdin
-loglevel error

Is it possible ?
The nostdin flag is required because ffmpeg swallows stdin. More info here.
The loglevel parameter would help a lot in reducing the verbosity of the ffmpeg output

Audio Support

I have used filter_complex on a video as below, but the audio stream is lost.
I would like to be able to adjust the audio and video playback rate; suggestions?

import ffmpeg
stream = ffmpeg.input('input.mp4')
stream = ffmpeg.filter_(stream, 'fps', fps=20)
stream = ffmpeg.filter_(stream, 'crop', 'iw*.63:ih*.63:iw-(iw*.64):ih-(ih*.833)')
stream = ffmpeg.filter_(stream, 'setpts', '0.8333334*PTS')
stream = ffmpeg.output(stream, 'output.mp4')
ffmpeg.run(stream)

With ffmpeg I can mux back in the audio (also with a speedup);
-filter_complex 'crop=iw*.63:ih*.63:iw-(iw*.64):ih-(ih*.833)[vid];[vid]fps=20[vid2];[vid2]setpts=0.8333334*PTS[v];[0:a]atempo=1.2[a]' -map '[v]' -map '[a]'

edit memory video files

Is it possible to read an in-memory video file, process it, and then write it back to memory?
I need to combine a video file and an audio file from AWS S3 and write the result back to S3. I am trying to avoid saving the video to disk first.

thanks.

Playing sound causes a crash?

With ffpyplayer on Windows 10 Python 3.6, I try running the following program on ogg or wav files:

from ffpyplayer.player import MediaPlayer

vid = 'arcade/examples/sounds/phaseJump1.ogg'
player = MediaPlayer(vid)
val = ''

while val != 'eof':
    frame, val = player.get_frame()

print("Done")

The sound is played, but at the end of the sound python crashes and I get: Process finished with exit code -1073741819 (0xC0000005)

Is there a way to read from pipes?

Hi,
I know that FFmpeg can send decoded/demuxed data from the stream into a pipe for others to read. I've done it in C# and was wondering whether or not this library has the ability to acquire data from the pipe.

send help plz

Add logger to ffmpeg call

It would be very useful to have a logger for the ffmpeg call.
It would make it easier to integrate into a system of loggers.

Is it possible to have this in the future?

Obtain stream information

Hello,
Is there a way to get the size (width and height) of an input video stream?
Thanks for the good work!

ffmpeg is guessing incorrect format based on -i argument

It appears that ffmpeg is guessing the audio format incorrectly if the -f option appears after the -i option in the ffmpeg command. The following command works:

ffmpeg -f s16le -ac 1 -ar 16000  -i /home/bstaley/Downloads/20160509132350369-data.raw /tmp/test3.mp3

Unfortunately, ffmpeg-python generates the equivalent:

ffmpeg -i /home/bstaley/Downloads/20160509132350369-data.raw -f s16le -ac 1 -ar 16000 /tmp/test3.mp3

which results in:

[rawvideo @ 0x13ba800] Invalid pixel format.

Is there a way to force the placement of -f relative to -i?

Unclear way to specify output file parameters

Hi,
I'm noticing that there's no clear way to specify the output formats.
I need to apply some filters to different streams (scale them) and output multiple formats at the same time.

I need to specify the output codecs and quality parameters, and I'm uncertain how that can be done.

filter_complex should not be used for basic pipelines

I'm trying to use ffmpeg-python for a very simple task, concatenating many mp3s together.
I set up my pipeline like so:

import ffmpeg
import os
import os.path

audio_files = sorted(os.listdir("."))[:5]
audio_inputs = list(map(lambda audio_file: ffmpeg.input(audio_file), audio_files))
ffmpeg.concat(*audio_inputs).output("concat_test.mp3").run()

Which produces the command:

ffmpeg -i "file-0" -i "file-1" -i "file2" -i "file3" -i "file4" -filter_complex [0][1][2][3][4]concat=n=5[s0] -map [s0] concat_test.mp3

This fails with the error message:
Stream specifier '' in filtergraph description [0][1][2][3][4]concat=n=5[s0] matches no streams.

However if I remove the -filter_complex and -map flags then the command runs as expected.

How to write frame number with drawtext

Hello, I'm trying to write the frame number on a video, which I would normally do with text="%{n}" in ffmpeg.
When I try that in ffmpeg-python I just get it written literally, with escape_text=True or escape_text=False.
So how would I actually do it? My whole command chain is this:

ffmpeg.input(inputz)
      .drawtext(text="%{n}",start_number=0,
            fontfile="/Users/jamieparry/PyInstaller-2.1/kivthing/dist"
            "/kivthing/kivy_install/data/fonts/DroidSans-Bold.ttf",
            fontcolor="red",x=40,y=100,timecode="00:00:00:00",timecode_rate=25,
            fontsize="64",escape_text=True)
      .output(os.path.join(folderplace,"ff_outputz",justname))
      .run(overwrite_output=True)

Missing documentation

Hello @kkroening,
I was going to make something like this for a work project, but then found your module.
I gave it a quick look and it seems pretty nice, but I see only a few filters are supported and there's no documentation.

Are you going to write any documentation for this and add more filters?

I may submit PRs to add a couple of them, I need effects like scale (zoom), sepia, greyscale and stuff like that, which I don't see.

Run non-interactively

Currently ffmpeg will ask you questions, which freezes the process while it waits for response. That makes it problematic to use on the web.

filter_ parsing error

When running version 1.6 the following filter definition works;

stream = ffmpeg.filter_(stream, 'crop', 'iw*.63:ih*.63:iw-(iw*.64):ih-(ih*.833)')

But if I run latest f8409d4 I get a parsing error returned by ffmpeg;
[Parsed_crop_1 @ 0x4c86340] [Eval @ 0x7ffee8b86310] Invalid chars ':ih*.63:iw-(iw*.64):ih-(ih*.833)' at the end of expression 'iw*.63:ih*.63:iw-(iw*.64):ih-(ih*.833)'
Error when evaluating the expression 'iw*.63:ih*.63:iw-(iw*.64):ih-(ih*.833)'

You can see that FFMpeg is rejecting part of the string, I guess the escaping is broken?

Upload to static file in flask project

def create():
    if request.method == 'POST':
            file = request.files["video"]
            stream = ffmpeg.input(file)
            stream = ffmpeg.hflip(stream)
            stream = ffmpeg.output(stream, app.root_path + '/' + app.config['UPLOAD_FOLDER'] + '/videos/dd.mp4')
            ffmpeg.run(stream)

-filter_complex: No such file or directory (error from ffmpeg)

Any solution for my problem? I try change directory and check directory before pass a path but error still same.

audio not copying over when using concat and/or overlay

first off, thanks for this great piece of software. It really does make working with ffmpeg way clearer.

I'm attempting to concat two videos and then stick an overlay on them. The video details are below in case they help.

Basically video 1 is a leader clip with audio stream that is blank, video 2 is actual content with audio stream and overlay image is transparent png

if i run as described below, Video renders fine, but no audio.

To narrow down the issue, I took out overlay and concat (just resized the video);
audio copied over fine.

When adding the concat back in, I lose audio. I discovered it's because I need to pass the kwarg a=1, but after I do that, the encoding process hangs, probably because the audio streams are different formats.

I saw this issue #26 which seems to indicate ffmpeg-python is currently missing audio options.

Am I SOL here, or is there some way I can accomplish what I want, i.e.:

  • concatenate two videos together with audio
  • png overlay applied.

If I can get just the audio from video 2, that would be OK as well.

thanks in advance

P.S. On a side note, how am I supposed to feed multiple map parameters into ffmpeg-python, and how do I identify the streams? The ffmpeg syntax does something like 0:0, but the output from ffmpeg-python has '[s0]'.

out_config = {
    'x264opts': x264_params,
    'c:a': 'ac3',
    'b:a': '160k',
    's': '480x272',
    #'map': '[s0:1]',
    #'acodec': 'copy',
}

    ffmpeg
                .concat(
                    ffmpeg.input(target_leader_file),
                    ffmpeg.input(target_input_file),
                    a=1,
                    unsafe=True
                )
                #.overlay(ffmpeg.input(target_watermark_file))
                .output(target_output_file, **out_config)
                .overwrite_output()

1st video:

  Metadata:
    major_brand     : qt
    minor_version   : 537199360
    compatible_brands: qt
    creation_time   : 2016-05-09T14:58:08.000000Z
    com.apple.finalcutstudio.media.uuid: C23F6D00-710C-4D81-A9FD-A61C4DF24C3C
  Duration: 00:00:03.04, start: 0.000000, bitrate: 43535 kb/s
    Stream #0:0(eng): Video: prores (apcn / 0x6E637061), yuv422p10le(bt709, progressive), 1920x1080, 41965 kb/s, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 2997 tbn, 2997 tbc (default)
    Metadata:
      creation_time   : 2016-05-09T14:58:08.000000Z
      handler_name    : Apple Alias Data Handler
      encoder         : Apple ProRes 422
      timecode        : 01:00:00:00
    Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s (default)
    Metadata:
      creation_time   : 2016-05-09T14:58:08.000000Z
      handler_name    : Apple Alias Data Handler
    Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
    Metadata:
      creation_time   : 2016-05-09T14:58:08.000000Z
      handler_name    : Apple Alias Data Handler
      timecode        : 01:00:00:00

2nd video:

  Duration: 00:00:43.07, start: 0.511600, bitrate: 24045 kb/s
  Program 1
    Stream #1:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #1:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s

Add a way to pass arbitrary arguments to ffmpeg.run()

Great work!

I'd like to pass -loglevel quiet to ffmpeg.run.
I've tried this but it fails:

>>> stream = ffmpeg.nodes.GlobalNode(stream, 'loglevel', 'quiet')
>>> ffmpeg.run(stream)
AssertionError: Unsupported global node: loglevel(quiet)

I've used ffmpeg.run(stream, cmd=['ffmpeg', '-loglevel', 'quiet']) as a workaround, but it looks like GlobalNode is very restricted. Also multiple GlobalNode can't be chained.

How do you record the screen?

I've been looking for python bindings for a screen recorder, and I can't seem to get anywhere near 60fps.

PIL.ImageGrab is about 8-9 FPS
mss.grab is about 15-16 FPS

This is on an i7 2600K as well... so -- really surprised how slow those two are.

Unable to find a suitable output format for 'pipe'

Heyo, I figured out how to send STDOUT to a pipe, but for some reason it exits with an error.

Screenshots:

Output:
https://imgur.com/6PGWSDQ

Code:

class Stream:
    run = False

    def __init__(self, camid):
        camUtil = CameraUtil()
        self.camid = camid
        self.streamurl = camUtil.get_stream_from_id(self.camid)['streamURL']
        print(self.streamurl)
        self.args = ffmpeg.input(self.streamurl)
        self.args = ffmpeg.output(self.args, "pipe")
        self.args = ffmpeg.get_args(self.args)
        print(self.args)
        self.pipe = subprocess.Popen(['ffmpeg'] + self.args, stdout=subprocess.PIPE)

    def dep_stream(self):
        self.run = True
        self.pipe
        while self.run:
            output = self.pipe.communicate()
            print(output)

Please advise

Sample rate (-ar) option

Could we have a sample rate option to modify the sample rate of the output file?

eg. Converting a 48000kHz audio file to 44100kHz.
ffmpeg.input('in.mp4').encode_(samplerate=44100).output('out.mp4').run()

Ability to track progress of an ffmpeg command

Is there a way to track progress after running an ffmpeg command?

For example, below is what i would like to do:

import ffmpeg
ffmpeg.input('test.mp4').output('frame_%06d.jpg').run()

This command writes each frame of a video as an image to disk.

At the very least it would be great if we could see the output generated by running ffmpeg on the command line: ffmpeg -i test.mp4 %06d.png -hide_banner

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'workflow_video_01.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 2036-02-06 06:28:16
    encoder         : HandBrake 0.10.2 2015060900
  Duration: 00:50:57.00, start: 0.000000, bitrate: 7040 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 7038 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2036-02-06 06:28:16
      handler_name    : VideoHandler
Output #0, image2, to 'dump/%06d.png':
  Metadata:
    major_brand     : mp42
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.40.101
    Stream #0:0(und): Video: png, rgb24, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2036-02-06 06:28:16
      handler_name    : VideoHandler
      encoder         : Lavc56.60.100 png
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> png (native))
Press [q] to stop, [?] for help
frame=  677 fps= 59 q=-0.0 Lsize=N/A time=00:00:27.08 bitrate=N/A  

Any ideas on how to do this?
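One approach is run_async(pipe_stderr=True), since ffmpeg writes its status lines to stderr. A sketch; note that ffmpeg terminates its progress updates with carriage returns rather than newlines, so the line splitting below may need adjustment:

```python
import ffmpeg

def run_with_progress(stream):
    # run_async() returns a subprocess.Popen; ffmpeg's "frame= ... time=..."
    # status output arrives on stderr and can be parsed for progress.
    process = stream.run_async(pipe_stderr=True)
    for line in iter(process.stderr.readline, b''):
        print(line.decode('utf-8', errors='replace'), end='')
    return process.wait()

# run_with_progress(ffmpeg.input('test.mp4').output('frame_%06d.jpg'))
```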

Run in background or asynchronously

Is there any possibility to make the ".run()" method run in the background or asynchronously? Without directly using threads or similar options (twisted, etc.).

If not, is there are option to stop the ".run()" method? Or should I use threads or similar to achieve this? I want to avoid the threading because the code will run on a simple RPi.

Thanks in advance

Support non-Python characters in node arguments

I am new to this great project and am very much enjoying the ease of use, especially the complex filtering api. However, it is unclear to me how to multiplex multiple inputs as so:
ffmpeg -i audio.m4a -i video.mp4 -c:v libx264 -c:a aac output.mp4
Is this possible with this library?

Replace "filter_" with something that tries to resolve the filter automatically

I think we could change the way the module works by getting the list of available filters from somewhere and mapping them to a catch-all callable that generates filter args automatically.

I haven't tried doing it already because there are some things to discuss, like how to distinguish single-input filters from multi-input, and how to decide whether a filter is acceptable or not.

Write better documentation

As the project is expanding I'm seeing a lot of unclarity in the code.

You now have some objects that are children of Node, some that are children of Stream. I read everywhere outgoing_edge, incoming_edge, node, but it's unclear what they do and how the graph is resolved: it's all buried in Python syntactic sugar, which is not a bad thing if properly documented.

Could you please explain the current project structure, what each object exactly does, how the graph paths are visited to generate the command line? I would be happy to help you with the project if only I could understand what is going on under the hood.

Thank you!

I'm trying to implement the audio thing (#26) but I can't figure out how to add an outgoing edge to a stream and represent it as :a etc.

Text expansions

Hey guys,

I'm trying to evaluate expression like this:
stream.drawtext(text=r"%%{eif\\:n+%s\\:d\\:4}" % start_frame_number, ...)
but I always get eif related errors. Something funky is going on with escaping characters in this case.

Anyone else hit that? How should I format the expression for it to work?

Thanks,
kk

Support setting `bitrate` for output files

Simple use case :

stream = ffmpeg.input('musicvideo.mp4')
stream = ffmpeg.output(stream, 'musicaudio.mp3', bitrate='320k')
ffmpeg.run(stream)

Also, consider adding an option to explicitly specify whether to use VBR (variable-bitrate) or CBR (constant-bitrate)

Changing output file type

I can't figure out how to do something with this. I want to do the equivalent of the following ffmpeg command.

ffmpeg -i input.avi -map 0:0 -map 0:1 -c:v libx264 -c:a aac output.mp4

I have tried the following:

import ffmpeg
input = ffmpeg.input('input.avi')
output = ffmpeg.output(input, 'output.mp4', map 0:0, map 0:1, c:v libx264, c:a aac)

But that last line produces the following error:

>>> output = ffmpeg.output(input, 'input.mp4', c:v libx264, c:a aac)                                              
  File "<stdin>", line 1
    output = ffmpeg.output(input, 'input.mp4', c:v libx264, c:a aac)                                              
                                                 ^
SyntaxError: invalid syntax

Any help would be great.

Fails on non-ASCII filenames on Windows

Unable to process filenames with Unicode characters in them, like é:

Traceback (most recent call last):
  File "C:\Users\abdullah\src\python\ffmpeg-dir-conv\bulkc.py", line 46, in <module>
    main()
  File "C:\Users\abdullah\src\python\ffmpeg-dir-conv\bulkc.py", line 33, in main
    s = ffmpeg.input(f)
  File "C:\Python27\lib\site-packages\ffmpeg\_ffmpeg.py", line 27, in input
    return InputNode(input.__name__, kwargs=kwargs).stream()
  File "C:\Python27\lib\site-packages\ffmpeg\nodes.py", line 173, in __init__
    kwargs=kwargs
  File "C:\Python27\lib\site-packages\ffmpeg\nodes.py", line 124, in __init__
    super(Node, self).__init__(incoming_edge_map, name, args, kwargs)
  File "C:\Python27\lib\site-packages\ffmpeg\dag.py", line 119, in __init__
    self.__hash = self.__get_hash()
  File "C:\Python27\lib\site-packages\ffmpeg\dag.py", line 111, in __get_hash
    hashes = self.__upstream_hashes + [self.__inner_hash]
  File "C:\Python27\lib\site-packages\ffmpeg\dag.py", line 108, in __inner_hash
    return get_hash(props)
  File "C:\Python27\lib\site-packages\ffmpeg\_utils.py", line 63, in get_hash
    repr_ = _recursive_repr(item).encode('utf-8')
  File "C:\Python27\lib\site-packages\ffmpeg\_utils.py", line 55, in _recursive_repr
    kv_pairs = ['{}: {}'.format(_recursive_repr(k), _recursive_repr(item[k])) for k in sorted(item)]
  File "C:\Python27\lib\site-packages\ffmpeg\_utils.py", line 55, in _recursive_repr
    kv_pairs = ['{}: {}'.format(_recursive_repr(k), _recursive_repr(item[k])) for k in sorted(item)]
  File "C:\Python27\lib\site-packages\ffmpeg\_utils.py", line 51, in _recursive_repr
    result = str(item)
  File "C:\Python27\lib\site-packages\future\types\newstr.py", line 102, in __new__
    return super(newstr, cls).__new__(cls, value)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 21: ordinal not in range(128)

filter_ for volumedetect

Hi everyone, I'm really sorry to open an issue but I'm desperate now :(. How can I use the volumedetect filter?
I've been following a tutorial and ended up with this code:

ffmpeg
        .filter_('volumedetect')
        .input(in_filename, **input_kwargs)
        .output('-', format='s16le', acodec='pcm_s16le', ac=1, ar='16k')
        .overwrite_output()
        .compile()

I'm really new to this. I saw that filter_ needs a stream, and while researching I noticed that .input(in_filename) gives me that stream, but how do I use it here? And the argument

-f null

how do I add it too? I just need to get the dB level from the audio. Any help appreciated, thanks!
