cloudconvert-node's People

Contributors

3ximus, chriszieba, dependabot[bot], joostverdoorn, josiasmontag, juho, knorpelsenf, qwicigentech, stanleysathler, stefansundin

cloudconvert-node's Issues

Installation problem

Please include the required modules (fs, net) with their versions in package.json. I am failing to build with this module:
ERROR in ./node_modules/cloudconvert/lib/process.js
Module not found: Error: Can't resolve 'fs' in '/Users/air/Desktop/Projects/general/noplag-2018/source/node_modules/cloudconvert/lib'
@ ./node_modules/cloudconvert/lib/process.js 5:9-22
@ ./node_modules/cloudconvert/lib/api.js
In this case, fs 0.0.1-security gets installed.

created process = null ?

Hi,
when I create a process like the following, the returned process is just null:

var fs = require('fs'),
    cloudconvert = new (require('cloudconvert'))('MY-API-KEY');
 
cloudconvert.createProcess({
    "inputformat": "flv",
    "outputformat": "mp4"
}, function(process) {
     // process = null
});

How can I debug this, please?

Inconsistent docs

The website with the socket API specs defines four event types for jobs and tasks:

  • created
  • updated
  • finished
  • failed.

In contrast, the README file declares five possible event types:

  • created
  • updated
  • finished
  • error
  • deleted.

This is inconsistent. Which version is correct?

No other API client software implements the websocket protocol, therefore I cannot find a third reference. Please fix this so I can properly type the events when working on #46.

Params to disable extracting .zip file content to another subdirectory ?

Is there any option to disable extracting .zip file contents into a subdirectory?

Ex:

// Archive.zip contains the following structure
- Folder1
  - File1.jpg
  - File2.jpg
  - File3.jpg

When I run my code, the contents of my zip file end up under
Archive > Folder1 > content here.

What I want is for the contents to be directly under Archive.
Is that doable?

Here's a portion of my code

let params = {
    mode: "extract",
    input: {
        s3: {
            accesskeyid: 'xxxxxxxxxxxxx',
            secretaccesskey: 'xxxxxxxxxxxxxxxxxxxxx',
            bucket: "xxxx-files-xxxx"
        }
    },
    file: srcKey,
    output: {
        s3: {
            accesskeyid: 'xxxxxxxxxxxxxxxxx',
            secretaccesskey: 'xxxxxxxxxxxxxxxxxxxx',
            bucket: 'xxxx-assets-xxxx',
            path: "zfn2vN/qsYQDU/Archive/"
        }
    }
};

process.start(params);

Problem with create import/upload jobs

Hi,
I'm playing with the new v2 API and trying to piece things together.
When I try to upload a local file, there seems to be a problem:
on my dashboard, I see the job in "waiting", but nothing else happens,
even though the file is relatively small in size, so the "upload" task should be quick.

Here's the code I'm running.

const job = await cloudConvert.jobs.create({
    tasks: {
        import: {
            operation: 'import/upload',
            file: "index.html"
        }
    }
});

I've stripped the convert/export tasks since the issue seems to be related to the import part.
The "index.html" file is in the root of the folder I'm working in.

By the way, the same file works just fine when I use the v1.

Doesn't work behind a corporate proxy

Please add options to set a proxy. I've already added the environment variables HTTP_PROXY, HTTPS_PROXY with no effect.

executing this:

cloudconvert.convert({
    "inputformat": "eps",
    "outputformat": "svg",
    "input": "upload",
     "file": "src/input.eps"
  })
  .pipe(fs.createWriteStream('outputfile.svg'));

I get this error:

events.js:141
throw er; // Unhandled 'error' event
^
Error: connect ETIMEDOUT 137.74.133.1:443
at Object.exports._errnoException (util.js:907:11)
at exports._exceptionWithHostPort (util.js:930:20)
at TCPConnectWrap.afterConnect [as oncomplete]

executing this:

cloudconvert.get('/conversiontypes', {
    inputformat: 'pdf',
    outputformat: 'jpg'
}, function(err, result) {
    if (err) {
        console.log(err);
    } else {
        console.log(result);
    }
});

will log this:

{ [Error: connect ETIMEDOUT 137.74.133.1:443]
code: 'ETIMEDOUT',
errno: 'ETIMEDOUT',
syscall: 'connect',
address: '137.74.133.1',
port: 443 }

wait doesn't work

I'm trying to call cloudconvert-node with these parameters:

cloudconvert.convert({
    inputformat: 'html',
    outputformat: 'docx',
    input: 'base64',
    wait: true,
    download: false,
    file: base64.fileBase64,
    filename: `${tripKey}.html`
})
.pipe(Write.stream(`../../docs/${tripKey}.doc`)
    .on('finish', () => (console.log('1'))));
console.log('2');
return { url: `/assets/docs/${tripKey}.doc` };

console.log('2') always happens before console.log('1')

How could I make the call wait until convert is done?

//Thomas

Conversion type not supported: zip to ? (mode info)

I'd like to know if I'm doing something wrong here while trying to extract a ".zip" file.

cloudConvert.createProcess({
    "inputformat": "zip",
    "outputformat": "*",
    "mode": "extract"
}, (error, process) => {
    console.log(error)
});

Please do not mind any missing "), }".
No errors when I run this code.
The problem shows on the dashboard

"Conversion type not supported: zip to ? (mode info)".

No idea what I'm doing wrong, I've tried the Api with

cloudconvert.convert({})

but since I'm using the createProcess method here, maybe I'm going about it the wrong way.

Cheers.

Trouble with node client

I am having trouble creating a job in Node. What is odd is I can copy these tasks and API key and run them just fine in Postman.

In Node I am receiving a 401 status code. I even tried generating a new key, with the same result. In Postman this key works.

import CloudConvert from 'cloudconvert';

const cloudConvert = new CloudConvert('sandBoxAPIKey');

async function convert() {
    try {
        const job = await cloudConvert.jobs.create({
            "tasks": {
                "importConvertVideo": {
                    "operation": "import/url",
                    "url": "https://urlToFile",
                    "filename": "IMG_0090.MOV",
                    "headers": {
                        "companyID": 101,
                        "projectID": 102
                    }
                },
                "taskConvertVideo": {
                    "operation": "convert",
                    "input_format": "mov",
                    "output_format": "mp4",
                    "engine": "ffmpeg",
                    "input": ["importConvertVideo"],
                    "video_codec": "x264",
                    "crf": 23,
                    "preset": "fast",
                    "audio_codec": "aac",
                    "audio_bitrate": 128,
                    "engine_version": "4.1.4"
                },
                "exportConvertVideo": {
                    "operation": "export/url",
                    "input": ["taskConvertVideo"],
                    "inline": false,
                    "archive_multiple_files": false
                }
            }
        });
    } catch (error) {
        console.log("error: ", error)
    }
}

convert()

Empty Download

My Code:

process.upload(bucketBillDTA.openDownloadStream(id), id, function (err, process) {
    if (err) {
        console.error('CloudConvert Process upload failed: ' + err);
    } else {
        // wait until the process is finished (or completed with an error)
        process.wait(function (err, process) {
            if (err) {
                console.error('CloudConvert Process failed: ' + err);
            } else {
                console.log('Done: ' + process.data.message);

                // download it
                process.download(bucketBillDTA.openUploadStreamWithId(id + '.pdf', 'test.pdf'), null, function (err, process) {
                    if (err) {
                        console.error('CloudConvert Process download failed: ' + err);
                    } else {
                        console.log('Downloaded to out.jpg');
                    }
                });
            }
        });
    }
});

The process.download is always empty with 0 bytes. Why?

no signatureVersion

When I try to output to AWS S3, there seems to be no way to specify signatureVersion: 'v4', so I always get:

CloudConvert Process failed: Error: Saving to S3 failed: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Code: InvalidRequest)

500 error on jobs.create() with many imports

The node SDK v2.1.3 returns a 500 when using cloudConvert.jobs.create() with 100 import tasks.
However, the job still successfully starts and completes.

A similar error also occurs when using the API directly at POST https://api.cloudconvert.com/v2/jobs.

Payload to reproduce:

tasks: {
    import00: { filename: 'img-00.jpg', operation: 'import/url', url: 'https://i.imgur.com/lTlExZh.jpg' },
    import01: { filename: 'img-01.jpg', operation: 'import/url', url: 'https://i.imgur.com/lTlExZh.jpg' },
    ...
    import99: { filename: 'img-99.jpg', operation: 'import/url', url: 'https://i.imgur.com/lTlExZh.jpg' },

    compress: {
        operation: 'archive',
        output_format: 'zip',
        filename: '100-images.zip',
        input: ['import00', 'import01', ... 'import99']
    },

    export: {
        operation: 'export/url',
        input: 'compress'
    }
}
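For anyone reproducing this, the 100-task payload can be generated rather than written out by hand; a sketch with a hypothetical buildArchiveJob helper:

```javascript
// Hypothetical generator for the payload above: N import/url tasks,
// one archive task bundling them, and one export/url task.
function buildArchiveJob(urls) {
  const tasks = {};
  const importNames = urls.map((url, i) => {
    const name = `import${String(i).padStart(2, '0')}`;
    tasks[name] = {
      filename: `img-${String(i).padStart(2, '0')}.jpg`,
      operation: 'import/url',
      url,
    };
    return name;
  });
  tasks.compress = {
    operation: 'archive',
    output_format: 'zip',
    filename: `${urls.length}-images.zip`,
    input: importNames,
  };
  tasks.export = { operation: 'export/url', input: 'compress' };
  return { tasks };
}
```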

Support for async/await

Is there support for async/await? I am using this in a Lambda function and need the container to stay alive until the process is complete, because it is a synchronous conversion. I am currently using the async/await pattern everywhere else, so I need to be able to await a process id.
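Until the SDK exposes promises natively, the v1 callback style can be wrapped manually; a sketch assuming the error-first (err, process) callback shape used elsewhere in these issues, with a hypothetical createProcessAsync name:

```javascript
// Hypothetical promise wrapper around the v1 callback API, assuming
// createProcess(options, cb) invokes cb with (err, process).
function createProcessAsync(cloudconvert, options) {
  return new Promise((resolve, reject) => {
    cloudconvert.createProcess(options, (err, process) => {
      if (err) reject(err);
      else resolve(process);
    });
  });
}
```

With that in place, `const proc = await createProcessAsync(cloudconvert, { inputformat: 'pdf', outputformat: 'jpg' })` fits the async/await pattern.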

No .on() calls are done

Hi,

I have the code below, and I'm only getting 'Started promise', 'PdfFromHtml 1', 'PdfFromHtml 2' and 'PdfFromHtml 3' printed to the log, and then it seems to hang forever. What am I doing wrong? I can see from the dashboard that the PDF is created OK and that one conversion minute is invoiced per run of the code below.

From what I understand, none of the .on() callbacks are called.

const generatePdfFromHtml = (invoiceCreationJson: any): Promise<string> => {
  const promise: Promise<string> = new Promise((resolve, reject) => {

    try {
      console.log('Started promise')

      const streams = require('memory-streams')

      const reader = new streams.ReadableStream('<html><body><H1>This is a test</H1></body></html>')

      console.log('PdfFromHtml 1')

      const writer = new streams.WritableStream()

      console.log('PdfFromHtml 2')

      const cloudconvert = new (require('cloudconvert'))('mykey')
      
      console.log('PdfFromHtml 3')

      reader.pipe(
        cloudconvert.convert({
          "inputformat": "html",
          "outputformat": "pdf"
        })
      )
      .pipe(writer)
      .on('finish', () => {
        console.log('PdfFromHtml 9')
        try {
          resolve(writer.toString())
        } catch(err) {
          reject(err)
        }
      })
      .on('downloaded', () => {
        console.log('PdfFromHtml 4')
        try {
          resolve(writer.toString())
        } catch(err) {
          reject(err)
        }
      })
      .on('error', (err: any) => {
        console.log('PdfFromHtml 5')
        reject(err)
      })
      .on('finished', () => {
        console.log('PdfFromHtml 6')
      })
      .on('progress', () => {
        console.log('PdfFromHtml 7')
      })
      .on('uploadeded', () => {
        console.log('PdfFromHtml 8')
      })
      .on('started', () => {
        console.log('PdfFromHtml 9')
      })
    } catch(err) {
      reject(err)
    }

  })
  return promise
}

Finished event never called

The event is never called, and the output is always blank.

let api = 'my secret';
let cloudconvert = new (require('cloudconvert'))(api);
					
cloudconvert.createProcess({
    "mode": "combine",
    "inputformat": "pdf",
    "outputformat": "pdf"
}, (err, cloudProcess) => {

    console.log(cloudProcess);
    if (cloudProcess.url) {
        cloudProcess.start({
            "mode": "combine",
            "input": "download",
            "files": [
                "https://cloudconvert.com/assets/11e13801/testfiles/pdfexample1.pdf",
                "https://cloudconvert.com/assets/11e13801/testfiles/pdfexample2.pdf"
            ],
            "outputformat": "pdf"
        })
        .on('error', (err) => {
            console.log(err);
        })
        .on('finished', (data) => {
            console.log('Done: ' + data.message);
        })
        .on('started', () => {
            console.log('Started');
        })
        .on('downloaded', () => {
            console.log('downloaded');
        }).pipe(fs.createWriteStream('outputfile.pdf'));
    }
});

The conversion is counted on my dashboard, but nothing is written to outputfile.pdf (it is blank). Only the started event gets called; none of the other events are ever called. What is wrong?

extracting .zip file content to another subdirectory ?

The related issue #36 was closed with:

You would need to handle that on your end using the S3 SDK.

Any ideas/advice on how I should handle this?

PS: please don't close the issue this time as I might need to respond if necessary.
Otherwise I'll close the issue myself.

Cheers.
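If the files do have to be moved with the S3 SDK, the key rewriting itself is simple; a sketch of a hypothetical stripTopLevelFolder helper that computes the flattened destination key (the actual move would use the AWS SDK's copyObject and deleteObject calls):

```javascript
// Hypothetical post-processing helper: given an extracted object key and
// the archive prefix, drop the top-level folder from the key.
function stripTopLevelFolder(key, prefix) {
  const rest = key.slice(prefix.length);     // e.g. "Folder1/File1.jpg"
  const parts = rest.split('/');
  if (parts.length < 2) return key;          // nothing to flatten
  return prefix + parts.slice(1).join('/');  // drop the first folder segment
}
```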

Automatically handle rate limiting

In our use of this library, we are receiving 429 / Too many requests errors.

This library should handle rate limiting automatically so that these errors are not encountered, perhaps on an opt-in basis using some config parameters.

If that is not possible, then at a minimum it should expose the X-RateLimit-Limit and X-RateLimit-Remaining headers, and provide a function that takes a job and returns a delay to wait in order to avoid triggering these errors, so that we can implement our own queuing.

Thank you.

Reference: https://cloudconvert.com/api/v2#errors
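As an interim measure, a caller-side delay can be derived from those headers; a minimal sketch with a hypothetical rateLimitDelay helper (the window length is an assumption, since the headers expose counts, not reset times):

```javascript
// Hypothetical client-side helper: derive a wait time in milliseconds
// from the X-RateLimit-* response headers. The 60s window is an
// assumption; the headers only expose the limit and the remaining count.
function rateLimitDelay(headers, windowMs = 60000) {
  const remaining = Number(headers['x-ratelimit-remaining']);
  const limit = Number(headers['x-ratelimit-limit']);
  if (!Number.isFinite(remaining) || !Number.isFinite(limit) || remaining > 0) {
    return 0; // headers missing or quota not yet exhausted
  }
  return windowMs; // quota exhausted: back off for a full window
}
```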

Multiple outputs for one job ?

Hi,
is there a way to define multiple outputs for one input file, or do I have to implement that kind of feature manually, please?

e.g.:
input:
video.mp4

output:
video_thumb.jpg
video.webm
video.ogv

Problem converting from PPTX -> JPG using v1

I'm trying to use the API to convert a pptx into jpg images.
If I use v1 (office engine), for some reason it spits out "Failed to export file".

When I use the v2 widget embedded on the homepage of your site and switch to the libreoffice engine, it works fine.

Am I missing something, or is anything wrong with what I'm trying to do?

Here's the whole error that I get

{
    "id": "95877575-56f0-4cc3-b3cd-a8f7baee690d",
    "url": "//veronica.infra.cloudconvert.com/process/95877575-56f0-4cc3-b3cd-a8f7baee690d",
    "expire": 1578501500,
    "percent": 0,
    "message": "Failed to export file",
    "step": "error",
    "starttime": 1578497872,
    "output": {},
    "input": {
        "filename": "88a450b2-f527-4bb5-82c5-919c114f47fa.pptx",
        "name": "88a450b2-f527-4bb5-82c5-919c114f47fa",
        "ext": "pptx"
    },
    "converter": {
        "mode": "convert",
        "format": "jpg",
        "type": "office",
        "options": {
            "resize": null,
            "resizemode": "maxiumum",
            "resizeenlarge": false,
            "grayscale": false,
            "strip_metatags": false,
            "density": "120",
            "rotate": null,
            "page_range": null,
            "quality": 70,
            "disable_alpha": true,
            "command": null,
            "hidden_slides": false,
            "output_type": "slides",
            "slides_per_handout_page": 6,
            "first_slide_number": null,
            "input_password": null,
            "templating": null
        }
    },
    "group": "image",
    "endtime": 1578497900,
    "minutes": 0,
    "code": 422
}

CI build failing

https://travis-ci.org/cloudconvert/cloudconvert-node/jobs/314228942

> [email protected] test /home/travis/build/cloudconvert/cloudconvert-node
> jshint lib tests && node node_modules/.bin/mocha tests/unit
/home/travis/build/cloudconvert/cloudconvert-node/node_modules/mocha/node_modules/supports-color/index.js:2
const os = require('os');
^^^^^
SyntaxError: Use of const in strict mode.
    at exports.runInThisContext (vm.js:73:16)
    at Module._compile (module.js:443:25)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at Object.<anonymous> (/home/travis/build/cloudconvert/cloudconvert-node/node_modules/mocha/lib/reporters/base.js:11:46)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
npm ERR! Test failed.  See above for more details.

Allow defining if zip file on Cloudconvert should be created when using S3

We don't want any files stored on CloudConvert, just stored straight to our bucket. Having the files displayed on the dashboard is a security concern.

Just passing in a param to ignore the uploaded file in the CloudConvert system would be the easiest, I think. output.save seems not to apply here?

ApiError should include full body of response

Hi!
We're extensively using CloudConvert for converting PDF files.

We're getting a bunch of "Copying of text from this document is not allowed." errors when calling cloudconvert.get(URL TO PROCESS).

That's fine, but it would be super useful if the whole response body were included in the Error object.

For example, by adding this line at https://github.com/cloudconvert/cloudconvert-node/blob/master/lib/api.js#L42:

apierror.body = body;

That way we would be able to get the full status of a failed conversion and, for example, use it in our logs.

Thanks!

How to use api link to convert html to pdf

How can I convert a local HTML file to a PDF file and return it to the client?

Conversion is fine; returning it to the client rather than downloading it onto my server has been the issue. Any advice? I want it to download for the client, much like the API console example does when you upload a file manually.

Issue with webhook verify function

Hello, I'm trying to verify the webhook signature with the "verify" function.

I have double-checked that my API key and secret key are valid.

The function always returns false, did I miss something?

import { NextFunction, Response } from "express";
import { IRequest } from "../../interfaces";
import { handleErrorResponse, NOT_ALLOWED_ERROR } from "../../utils/handleError";
import CloudConvert from 'cloudconvert';

function verify(payloadString: string, signature: string) {
  const cloudConvert = new CloudConvert(process.env.CLOUDCONVERT_API_KEY);
  const signingSecret = process.env.CLOUDCONVERT_WEBHOOK_SECRET_KEY;

  return cloudConvert.webhooks.verify(
    payloadString,
    signature,
    signingSecret
  );
}

export function checkWebhookHeaderSignature(req: IRequest, res: Response, next: NextFunction) {
  if (!req.headers["cloudconvert-signature"]) {
    const err = handleErrorResponse(NOT_ALLOWED_ERROR);
    return res.status(err.statusCode).json(err);
  }

  const cloudConvertSignatureValue: string = req.headers["cloudconvert-signature"].toString();
  const body: string = JSON.stringify(req.body);

  if (!verify(body, cloudConvertSignatureValue)) {
    console.log('wrong')
    const err = handleErrorResponse(NOT_ALLOWED_ERROR);
    return res.status(err.statusCode).json(err);
  }

  return next();
}

Browser upload

Is it possible to use the option for uploading via the browser? The example in the documentation seems to be server-side.

Callback sometimes called twice when converting to and from S3

I'm converting multipage PDF to jpeg and on a 2 page PDF, the callback is sometimes called twice.

Here is the code I use:

cloudconvert.convert({
    "inputformat": "pdf",
    "outputformat": "jpg",
    "input": {
        "s3": {
            "accesskeyid": cloudConvertDownloadUserId,
            "secretaccesskey": cloudConvertDownloadSecretAccessKey,
            "bucket": S3_BUCKET
        }
    },
    "file": presentation.pdfURL.replace("https://" + S3_BUCKET + ".s3.amazonaws.com/", ''),
    "output": {
        "s3": {
            "accesskeyid": cloudConvertUploadUserId,
            "secretaccesskey": cloudConvertUploadSecretAccessKey,
            "bucket": S3_BUCKET,
            "path": s3SlidesFolder + "/",
            "acl": "public-read"
        }
    }
}, function (err, process) {
    console.log("complete")
});

Hook into localhost

Since there is no general CloudConvert support repo on GitHub and we are using Node.js in our project, I'm going to ask it here.

Is there any way we can point a hook (https://cloudconvert.com/api/hooks) at localhost, or use localhost as the callback, for development purposes?

Thanks in advance.

Progress only being called once

Hi,

I'm trying to track the progress of the conversions I'm making. The problem is that the progress event only gets called once.
Here's the code I'm using to get the progress:

cloudConvert.createProcess({
    inputformat: getExtension(signedUrl),
    outputformat: 'jpg',
}, function (error, process) {

    if (error) {
        console.log(error);
        return;
    }

    process.start(
        {
            converteroptions: fileObj.converteroptions,
            input: {
                s3: {
                    accesskeyid: awsCredentials.accessKeyId,
                    secretaccesskey: awsCredentials.secretAccessKey,
                    bucket: fileObj.bucket
                }
            },
            file: fileObj.key,
            outputformat: "jpg",
            output: {
                s3: {
                    accesskeyid: awsCredentials.accessKeyId,
                    secretaccesskey: awsCredentials.secretAccessKey,
                    bucket: fileObj.bucket,
                    path: getFilename(fileObj.key) + '/'
                }
            }
        }, (err) => {
            if (err) {
                console.log(err);
                return false;
            } else {
                process.wait((e) => {
                    if (e) {
                        console.log(e);
                        return false;
                    }
                    process.on('progress', data => {
                        console.log('Progress : ' + Math.floor(data.percent) + '%')
                    })
                }, 100);

            }
        })

});

Am I doing something wrong here?
I get

Progress: (3-4-5)% // either one of these numbers

and then nothing else.
I tried increasing the interval at which the event is emitted; still nothing.

Allow passing in mimetype

Instead of having to maintain a list of "short types" internally in consuming apps, inputformat and outputformat should accept a valid MIME type.
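A thin adapter on the caller's side illustrates the mapping the SDK would need; the table below is a hypothetical, partial example, not an exhaustive list:

```javascript
// Hypothetical adapter mapping MIME types to the "short" format names
// the API expects. Partial table, for illustration only.
const MIME_TO_FORMAT = {
  'application/pdf': 'pdf',
  'image/jpeg': 'jpg',
  'video/mp4': 'mp4',
  'text/html': 'html',
};

function toShortFormat(mimeType) {
  // Drop any parameters such as "; charset=..." before the lookup.
  const format = MIME_TO_FORMAT[mimeType.split(';')[0].trim()];
  if (!format) throw new Error(`Unmapped MIME type: ${mimeType}`);
  return format;
}
```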

Get file info (not yet | no longer) available

API v1 supported getting file information.
I see no way to do this with v2. Also, v1 returns a 500 error nowadays.

Is this feature silently being discontinued? If so, please provide further information on why this is the case. Also, please take down the documentation linked above, as it is outdated then.

If the feature is still supported, please
a) fix the issue with v1 and
b) provide the documentation & implementation.

cannot upload files

I do the following things.

First, create a new job:

const job = await cloudConvert.jobs.create({
    tasks: {
        'upload-my-file': {
            operation: 'import/upload'
        },
        'convert-my-file': {
            operation: 'convert',
            input: ['upload-my-file'],
            input_format: inputformat,
            output_format: outputformat
            // timeout: 60
        },
        'export-my-file': {
            operation: 'export/url',
            input: ['convert-my-file']
        }
    }
});

Then, upload the file:

const uploadTask = job.tasks.filter(task => task.name === 'upload-my-file')[0];
const readable = streamifier.createReadStream(fileContent);

await cloudConvert.tasks.upload(uploadTask, readable);

This is sent to https://storage.de.cloud.ovh.net/v1/AUTH_b2cffe8f45324c2bba39e8db1aedb58f/cloudconvert-files/823bfe12-0dc3-4356-991b-b95070cd7101/ but the API always returns a redirect like this:

  path: '/v2/upload/redirect/823bfe12-0dc3-4356-991b-b95070cd7101?status=400&message=no%20files%20to%20process',

It says "no files to process", but I'm not quite sure why. There is a file, and the readable is also perfectly fine:

readable MultiStream {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 16384,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    pipes: null,
    pipesCount: 0,
    flowing: null,
    ended: false,
    endEmitted: false,
    reading: false,
    sync: true,
    needReadable: false,
    emittedReadable: false,
    readableListening: false,
    resumeScheduled: false,
    emitClose: true,
    autoDestroy: false,
    destroyed: false,
    defaultEncoding: 'utf8',
    awaitDrainWriters: null,
    multiAwaitDrain: false,
    readingMore: false,
    decoder: null,
    encoding: null,
    [Symbol(kPaused)]: null
  },
  readable: true,
  _events: [Object: null prototype] {},
  _eventsCount: 0,
  _maxListeners: undefined,
  _object: <Buffer 50 4b 03 04 14 00 08 08 08 00 8a 5b a2 50 00 00 00 00 00 00 00 00 00 00 00 00 12 00 00 00 77 6f 72 64 2f 6e 75 6d 62 65 72 69 6e 67 2e 78 6d 6c a5 93 ... 190106 more bytes>,
  [Symbol(kCapture)]: false
}

Request body larger than maxBodyLength limit

I am getting the error "Request body larger than maxBodyLength limit" while uploading a file from local disk.

Code:

let job = await cloudConvert.jobs.create({
    "tasks": {
        "upload-webm": {
            "operation": "import/upload"
        },
        "webm-to-mp4": {
            "operation": "convert",
            "input_format": "webm",
            "output_format": "mp4",
            "engine": "ffmpeg",
            "input": ["upload-webm"],
            "video_codec": "x264",
            "crf": 23,
            "preset": "medium",
            "subtitles_mode": "none",
            "audio_codec": "aac",
            "audio_bitrate": 128,
            "engine_version": "4.1.4"
        },
        "export-mp4-to-s3": {
            "operation": "export/s3",
            "input": ["webm-to-mp4"],
            "bucket": "xxxx",
            "region": "xxxx",
            "access_key_id": "xxxx",
            "secret_access_key": "xxxx",
            "key": s3FileKey,
            "acl": "authenticated-read"
        }
    }
});

const uploadTask = job.tasks.filter(task => task.name === 'upload-webm')[0];

const inputFile = fs.createReadStream(__dirname + '/' + fileName);
console.log('InputFile =>', inputFile)

await cloudConvert.tasks.upload(uploadTask, inputFile, fileName.replace('../library/', ''));

Below are the error details:

ERROR cloudconvert => Error: Request body larger than maxBodyLength limit

config: {
url: 'https://storage.de.cloud.ovh.net/v1xxxxxx/cloudconvert-files/a35659a7-99de-4a47-9f07-0a451d106da9/',
method: 'post',
data: FormData {
_overheadLength: 837,
_valueLength: 151,
_valuesToMeasure: [Array],
writable: true,
readable: true,
dataSize: 0,
maxDataSize: 2097152,
pauseStreams: true,
_released: true,
_streams: [Array],
_currentStream: [DelayedStream],
_insideLoop: false,
_pendingNext: false,
_boundary: '--------------------------978290168414846072467162',
_events: [Object: null prototype],
_eventsCount: 3
},
headers: {
Accept: 'application/json, text/plain, /',
'Content-Type': 'multipart/form-data; boundary=--------------------------978290168414846072467162',
Authorization: {},
'User-Agent': 'cloudconvert-node/v2 (https://github.com/cloudconvert/cloudconvert-node)',
maxContentLength: Infinity,
maxBodyLength: Infinity
},
baseURL: 'https://api.cloudconvert.com/v2/',
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
adapter: [Function: httpAdapter],
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
validateStatus: [Function: validateStatus]
},
request: <ref *1> Writable {
_writableState: WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: true,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: true,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
afterWriteTickInfo: null,
buffered: [],
bufferedIndex: 0,
allBuffers: true,
allNoop: true,
pendingcb: 0,
prefinished: false,
errorEmitted: false,
emitClose: true,
autoDestroy: true,
errored: false,
closed: false
},
_events: [Object: null prototype] {
response: [Function: handleResponse],
error: [Function: handleRequestError]
},
_eventsCount: 2,
_maxListeners: undefined,
_options: {
protocol: 'https:',
maxRedirects: 21,
maxBodyLength: 10485760,
path: '/v1/xxxxxx/cloudconvert-files/a35659a7-99de-4a47-9f07-0a451d106da9/',
method: 'POST',
headers: [Object],
agent: undefined,
agents: [Object],
auth: undefined,
hostname: 'storage.de.cloud.ovh.net',
port: null,
nativeProtocols: [Object],
pathname: '/v1/Axxxxxx/cloudconvert-files/a35659a7-99de-4a47-9f07-0a451d106da9/'
},
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 10470226,
_requestBodyBuffers: [
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object], [Object], [Object],
[Object], [Object], [Object], [Object],
... 79 more items
],
_onNativeResponse: [Function (anonymous)],
_currentRequest: ClientRequest {
_events: [Object: null prototype],
_eventsCount: 6,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: true,
_last: true,
chunkedEncoding: true,
shouldKeepAlive: false,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: false,
_headerSent: true,
socket: [TLSSocket],
_header: 'POST /v1/xxxxx/cloudconvert-files/a35659a7-99de-4a47-9f07-0a451d106da9/ HTTP/1.1\r\n' +
'Accept: application/json, text/plain, /\r\n' +
'Content-Type: multipart/form-data; boundary=--------------------------978290168414846072467162\r\n' +
'Authorization: [object Object]\r\n' +
'User-Agent: cloudconvert-node/v2 (https://github.com/cloudconvert/cloudconvert-node)\r\n' +
'maxContentLength: Infinity\r\n' +
'maxBodyLength: Infinity\r\n' +
'Host: storage.de.cloud.ovh.net\r\n' +
'Connection: close\r\n' +
'Transfer-Encoding: chunked\r\n' +
'\r\n',
_onPendingData: [Function: noopPendingOutput],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
path: '/v1/xxxxxx/cloudconvert-files/a35659a7-99de-4a47-9f07-0a451d106da9/',
_ended: false,
res: null,
aborted: true,
timeoutCb: null,
upgradeOrConnect: false,
parser: [HTTPParser],
maxHeadersCount: null,
reusedSocket: false,
_redirectable: [Circular *1],
[Symbol(kCapture)]: false,
[Symbol(kNeedDrain)]: true,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype]
},
_currentUrl: 'https://storage.de.cloud.ovh.net/v1/xxxxx/a35659a7-99de-4a47-9f07-0a451d106da9/',
[Symbol(kCapture)]: false
},
response: undefined,
isAxiosError: true,
toJSON: [Function (anonymous)]
}

I have manually tried setting maxBodyLength and maxContentLength headers, but it didn't work.

Can you please help here? Thanks.
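Worth noting (assuming standard axios semantics, since the upload goes through axios): `maxBodyLength` and `maxContentLength` are axios request-config options, not HTTP headers. In the dump above they appear inside the request headers, where axios ignores them as size limits. A minimal sketch of the distinction:

```javascript
// maxBodyLength / maxContentLength belong in the axios request CONFIG,
// not in the headers object. Passing them as headers (as in the dump
// above) sends them to the server as literal header lines and has no
// effect on axios' own body-size limit.
const wrong = { headers: { maxBodyLength: Infinity } }; // ignored as a size limit
const right = { maxBodyLength: Infinity, maxContentLength: Infinity };

console.log(right.maxBodyLength === Infinity); // true
```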

Process expects stream to originate from fs.createReadStream

I have a stream that doesn't originate from the filesystem, which I want to pipe through cloudconvert. However, Process wants to stat the originating file to emit progress updates which depend on the size of the file to be uploaded. This is not possible with arbitrary streams, only with ones originating from fs.

See the following lines:
https://github.com/cloudconvert/cloudconvert-node/blob/master/lib/process.js#L126
https://github.com/cloudconvert/cloudconvert-node/blob/master/lib/process.js#L131

If necessary, I can provide a PR which solves this issue.

Non latin characters in file name

When using non-Latin characters (in my case Cyrillic) in the file name, I end up with converted file names like ��СТ.pdf.
I am using 'input': 'download' and 'output': {'s3'...
It would be useful if you could add an output filename option to explicitly set the resulting filename, or apply transliteration to input file names.
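As an interim caller-side workaround, one could sanitize the input filename to ASCII before starting the process. A crude sketch (not real transliteration; non-ASCII runs collapse to a single underscore, so the original characters are lost):

```javascript
// Replace every run of non-printable-ASCII characters with '_' so the
// name survives any latin-only handling downstream. Real transliteration
// of Cyrillic would need a mapping table or a library.
function toAsciiFilename(name) {
  return name.replace(/[^\x20-\x7E]+/g, '_');
}

console.log(toAsciiFilename('отчёт.pdf')); // '_.pdf'
```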

Progression Events , the manual way

Hi,
we have the following code:

return new Promise((resolve, reject) => {
    process.start({
        'input': {
            'googlecloud': {
                ...
            }
        },
        'file': 'video1.mp4',
        'filename': 'input.mp4',
        'output': {
            'googlecloud': {
                ...
            }
        }
    }, (err, process) => {
        if (err) {
            reject(err);
        } else {
            process.wait((err, process) => {
                process.on('progress', data => {
                    console.log('Conversion Progress:', data);
                });
                if (err) {
                    reject(err);
                } else {
                    resolve(process);
                }
            });
        }
    });
});

The conversion works, but I don't get any progress events fired. Am I listening on the wrong process?

throw new Error('Output not (yet) available')

This is my code; it's showing this error.

var fs = require('fs'),
    cloudconvert = new (require('cloudconvert'))(api_key);

fs.createReadStream('filename.jpg')
    .pipe(cloudconvert.convert({
        "inputformat": "jpg",
        "outputformat": "png",
        "input": "upload",
        "output": "dropbox"
    }))
    .pipe(fs.createWriteStream('filename.png'));

How to set a sandbox mode?

Hi, I'm trying to use CloudConvert api v2 and upload a file.

async convert(file: string, imageType: ImageType) {
  const cloudConvert = new CloudConvert(process.env.API_KEY);

   const job = await cloudConvert.jobs.create({
     tasks: {
      'upload': {
        operation: 'import/upload'
      }
    }
  });

  const uploadTask = job.tasks.filter(task => task.name === taskName)[0];
  const inputFile = fs.createReadStream(file);
  return await cloudConvert.tasks.upload(uploadTask, inputFile);
}

I'm trying to use the sandbox (I have a sandbox API key and a whitelisted file), but I'm getting a 401.
The CloudConvert documentation states that https://api.sandbox.cloudconvert.com should be used, but I can see in the response that the production API is being used instead. How do I point the Node.js lib at the sandbox?

Thanks
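For what it's worth, the v2 client's constructor appears to take a second boolean argument that switches it to the sandbox API (per the cloudconvert-node README: `new CloudConvert(apiKey, true)`). The base-URL selection, sketched as a standalone function (illustrative names, not the library's internals):

```javascript
// Illustrative sketch of the sandbox/production switch. With the real
// client, the toggle would be the second constructor argument:
//   const cloudConvert = new CloudConvert(process.env.SANDBOX_API_KEY, true);
function baseUrl(useSandbox) {
  return useSandbox
    ? 'https://api.sandbox.cloudconvert.com'
    : 'https://api.cloudconvert.com';
}

console.log(baseUrl(true)); // https://api.sandbox.cloudconvert.com
```

Note that the sandbox also requires a separate sandbox API key; a production key against the sandbox host (or vice versa) would explain a 401.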

uploading issue

I am using your code. The issue is with getting the images after converting a file. I upload a PPT file and convert it; everything works fine up to that point, but when the process finishes and I show all the files in a container, only some of them appear.
For example: I upload a PPT file, convert it, and put the result into a destination folder. The destination folder contains 16 images, but the container shows only 12.
When I upload DOC files, images are generated but only 3 are shown.
That's the issue; can anybody help as soon as possible?
Thanks.

Quick start example typo

Firstly, thanks for the library! I wanted to note that there is a bracket missing in the quick start example (corrected snippet below):
fs.createReadStream('tests/input.png')
    .pipe(cloudconvert.convert({
        inputformat: 'png',
        outputformat: 'jpg',
        converteroptions: {
            quality: 75
        }
    }))
    .pipe(fs.createWriteStream('out.jpg'))
    .on('finish', function() {
        console.log('Done!');
    });

How to get the uploaded file(s) to s3 after conversion ?

I'm successfully converting a .pptx file into thumbnails.
The problem is that I need the converted files' URL, bucket, and key as part of my response.
So far I've gone through the documentation and wasn't able to find an answer.

Am I missing something?
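One workaround in the meantime: since the destination bucket, region, and key prefix are supplied by you in the job's output options, the S3 locations can be reconstructed client-side. A sketch (bucket/region/key values are placeholders):

```javascript
// Build a virtual-hosted-style S3 URL from the parts you already know
// from the job's output options. Each key segment is URL-encoded
// separately so the '/' separators survive.
function s3Url(bucket, region, key) {
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return `https://${bucket}.s3.${region}.amazonaws.com/${encodedKey}`;
}

console.log(s3Url('my-bucket', 'us-east-1', 'thumbs/slide-1.png'));
// https://my-bucket.s3.us-east-1.amazonaws.com/thumbs/slide-1.png
```

This only works because you control the output location; it does not recover anything the API generated on its own (e.g. randomized key names).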
