
multer-s3's Introduction

Multer S3

Streaming multer storage engine for AWS S3.

This project is mostly an integration piece: it adapts the existing code samples from Multer's storage engine documentation, substituting a call to S3 for the file system. The existing solutions I found required buffering the multipart uploads into the actual filesystem, which is difficult to scale.

AWS SDK Versions

3.x.x releases of multer-s3 use AWS JavaScript SDK v3. Specifically, it uses the Upload class from @aws-sdk/lib-storage which in turn calls the modular S3Client.

2.x.x releases of multer-s3 use AWS JavaScript SDK v2 via a call to s3.upload.
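
For reference, a minimal sketch of the v3 upload path this engine builds on (illustrative only; the engine's internal call differs in detail):

const { S3Client } = require('@aws-sdk/client-s3')
const { Upload } = require('@aws-sdk/lib-storage')

// Stream a body to S3 via the Upload class from @aws-sdk/lib-storage.
async function streamToS3 (stream) {
  const upload = new Upload({
    client: new S3Client(),
    params: { Bucket: 'some-bucket', Key: 'some-key', Body: stream }
  })
  return upload.done() // resolves with the S3 response (Location, ETag, ...)
}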

Installation

npm install --save multer-s3

Usage

const { S3Client } = require('@aws-sdk/client-s3')
const express = require('express')
const multer = require('multer')
const multerS3 = require('multer-s3')

const app = express()

const s3 = new S3Client()

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    metadata: function (req, file, cb) {
      cb(null, {fieldName: file.fieldname});
    },
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

app.post('/upload', upload.array('photos', 3), function(req, res, next) {
  res.send('Successfully uploaded ' + req.files.length + ' files!')
})

File information

Each file contains the following information exposed by multer-s3:

Key                  Description                                        Note
size                 Size of the file in bytes
bucket               The bucket used to store the file                  S3Storage
key                  The name of the file                               S3Storage
acl                  Access control for the file                        S3Storage
contentType          The mimetype used to upload the file               S3Storage
metadata             The metadata object sent to S3                     S3Storage
location             The S3 URL used to access the file                 S3Storage
etag                 The etag of the uploaded file in S3                S3Storage
contentDisposition   The contentDisposition used to upload the file     S3Storage
storageClass         The storageClass used for the uploaded file in S3  S3Storage
versionId            Optional; returned by S3 for versioned buckets     S3Storage
contentEncoding      The contentEncoding used to upload the file        S3Storage
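
For example, in a single-file route these values are available on req.file (the route path and field name below are placeholders):

app.post('/upload-one', upload.single('photo'), function (req, res) {
  // Fields populated by multer-s3 after a successful upload:
  res.json({
    key: req.file.key,           // object name in the bucket
    location: req.file.location, // S3 URL of the uploaded file
    size: req.file.size,         // size in bytes
    etag: req.file.etag
  })
})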

Setting ACL

ACL values can be set by passing an optional acl parameter into the multerS3 object.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    acl: 'public-read',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

Available options for canned ACL.

ACL Option                 Permissions added to ACL
private                    Owner gets FULL_CONTROL. No one else has access rights (default).
public-read                Owner gets FULL_CONTROL. The AllUsers group gets READ access.
public-read-write          Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. Granting this on a bucket is generally not recommended.
aws-exec-read              Owner gets FULL_CONTROL. Amazon EC2 gets READ access to GET an Amazon Machine Image (AMI) bundle from Amazon S3.
authenticated-read         Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
bucket-owner-read          Object owner gets FULL_CONTROL. Bucket owner gets READ access. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
bucket-owner-full-control  Both the object owner and the bucket owner get FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
log-delivery-write         The LogDelivery group gets WRITE and READ_ACP permissions on the bucket. See the Amazon S3 server access logging documentation for more information.

Setting Metadata

The metadata option is a callback that accepts the request and file, and returns a metadata object to be saved to S3.

Here is an example that stores all fields in the request body as metadata, and uses an id param as the key:

var opts = {
  s3: s3,
  bucket: config.originalsBucket,
  metadata: function (req, file, cb) {
    cb(null, Object.assign({}, req.body));
  },
  key: function (req, file, cb) {
    cb(null, req.params.id + ".jpg");
  }
};

Setting Cache-Control header

The optional cacheControl option sets the Cache-Control HTTP header that will be sent if you're serving the files directly from S3. You can pass either a string or a function that returns a string.

Here is an example that will tell browsers and CDNs to cache the file for one year:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    cacheControl: 'max-age=31536000',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

Setting Custom Content-Type

The optional contentType option can be used to set the content type (MIME type) of the file. By default the content type is set to application/octet-stream. If you want multer-s3 to automatically detect the content type of the file, use the multerS3.AUTO_CONTENT_TYPE constant. Here is an example that will detect the content type of the file being uploaded.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

You may also use a function as the contentType, which should be of the form function(req, file, cb).
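
For instance, a minimal sketch that trusts the MIME type reported by the client (file.mimetype is populated by multer; real code should validate it):

contentType: function (req, file, cb) {
  // First argument is the error, second the content type to use.
  cb(null, file.mimetype)
}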

Setting StorageClass

storageClass values can be set by passing an optional storageClass parameter into the multerS3 object.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    acl: 'public-read',
    storageClass: 'REDUCED_REDUNDANCY',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

Setting Content-Disposition

The optional contentDisposition option can be used to set the Content-Disposition header for the uploaded file. By default, the contentDisposition isn't forwarded. In the example below, the value attachment forces the browser to download the uploaded file instead of trying to open it.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    acl: 'public-read',
    contentDisposition: 'attachment',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

Using Server-Side Encryption

An overview of S3's server-side encryption can be found in the S3 docs (http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html); be advised that customer-managed keys (SSE-C) are not implemented at this time.

You may use the S3 server-side encryption functionality via the optional serverSideEncryption and sseKmsKeyId parameters. Full documentation of these parameters in relation to the S3 API can be found at http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property and http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html.

serverSideEncryption has two valid values: 'AES256' and 'aws:kms'. 'AES256' uses the S3-managed key system, while 'aws:kms' uses the AWS KMS system and accepts the optional sseKmsKeyId parameter to specify the key ID of the key you wish to use. Leaving sseKmsKeyId blank when 'aws:kms' is specified will use the default KMS key. Note: you must instantiate the S3 instance with signatureVersion: 'v4' in order to use KMS-managed keys (see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version), and the specified key must be in the same AWS region as the S3 bucket.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    acl: 'authenticated-read',
    contentDisposition: 'attachment',
    serverSideEncryption: 'AES256',
    key: function(req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})
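
For KMS-managed keys, a sketch of the same options (the key ID below is a placeholder; omit sseKmsKeyId to use the default KMS key):

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    serverSideEncryption: 'aws:kms',
    sseKmsKeyId: 'your-kms-key-id', // hypothetical key ID in the bucket's region
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})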

Setting Content-Encoding

The optional contentEncoding option can be used to set the Content-Encoding header for the uploaded file. By default, the contentEncoding isn't forwarded. In the example below, using the value gzip, a file can be uploaded as a gzip file; when it is downloaded, the browser will decompress it automatically.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    acl: 'public-read',
    contentEncoding: 'gzip',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

You may also use a function as the contentEncoding, which should be of the form function(req, file, cb).
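
For instance, a minimal sketch that derives the encoding from the uploaded file's name ('identity' is the standard HTTP value for no encoding):

contentEncoding: function (req, file, cb) {
  // Mark files the client already gzipped; everything else is untransformed.
  var isGzipped = /\.gz$/.test(file.originalname)
  cb(null, isGzipped ? 'gzip' : 'identity')
}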

Testing

The tests mock all access to S3 and can be run completely offline.

npm test


multer-s3's Issues

Not at all Working

Hi,

The sample code is not working at all. I've changed it to use my credentials and I'm using the ACL public-read. Please check the issue.

var upload = multer({
	storage : multerS3({
		s3 : s3,
		bucket : 'mybucket',
		ACL : 'public-read',
		metadata : function(req, file, cb) {
			cb(null, {
				fieldName : file.fieldname
			});
		},
		limits : {
			fileSize : 10 * 1024 * 1024
		},
		key : function(req, file, cb) {
			
			cb(null, Date.now().toString())
		}
	})
}).single('photo');

app.post('/upload', function(req, res, next) {
	upload(req, res, function(err) {
		res.send('Successfully uploaded files!')
	});
})

Files uploaded using package completely stored in system memory?

Hey there, I was looking for something like this that would stream a file uploaded through an API request, and I just wanted to validate the functionality of this helper package. Is the file being uploaded through the request stored completely in local system memory before being uploaded to S3, or is it streamed to S3 as the API receives the file in parts? The reason I'm looking for an explicit answer is that I'm wondering whether I'll be able to use this method for files a bit larger than simple photos, such as small video clips. If it stores the full video clip before streaming to S3, I fear that multiple concurrent upload requests would exhaust memory much faster than if each request only held a small portion of the file at a time.

It could be ideal to add ACL for the uploads? :)

It would be ideal if users were able to pass in ACL modes during uploads :). Something like this:

  var upload = that.s3.upload({
        Bucket: that.options.bucket,
        Key: filePath,
        ACL: 'public-read', // <-- passed in via the main function
        ContentType: contentType,
        Body: (_stream || file.stream)
      })

ref: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property

Something we can pass in via a function. I can submit a pull request with the change, if that's OK?

Cheers,
Jeremy

Delete files

Is there support for deleting files?
If yes, can you please give an example?
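
multer-s3 only handles uploads; a stored object can be removed with the S3 client directly. A minimal sketch with SDK v3, assuming the bucket name and a key previously saved from req.file.key:

const { S3Client, DeleteObjectCommand } = require('@aws-sdk/client-s3')

const s3 = new S3Client()

// Delete a previously uploaded object by the key multer-s3 stored it under.
async function deleteUpload (key) {
  await s3.send(new DeleteObjectCommand({ Bucket: 'some-bucket', Key: key }))
}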

uploaded file

Why is the file not opening in the browser? It only downloads from Amazon.
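
A likely cause, based on the README above: without a contentType option, files are stored as application/octet-stream, which browsers download rather than display. A sketch of the documented fix:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    contentType: multerS3.AUTO_CONTENT_TYPE, // detect and store the real MIME type
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})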

Overwriting

There should be a way to disable file overwriting. Maybe something like:

var upload = multer({
  storage: s3({
    bucket: 'some-bucket',
    secretAccessKey: 'some secret',
    accessKeyId: 'some key',
    region: 'us-east-1',
    overwrite: false,
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

Drop requirements for aws-sdk data and dirname

As the aws-sdk auto-loads configuration and credentials from the ~/.aws/ directory, and the recommended way of loading credentials is to let the aws-sdk do so, I suggest we drop the requirements for region, secretAccessKey and accessKeyId.

The dirname option should also not be required, as it is not mandatory to upload the file to a directory (though it is useful in most cases).

What do you think?

  if (!opts.secretAccessKey) throw new Error('secretAccessKey is required')
  if (!opts.accessKeyId) throw new Error('accessKeyId is required')
  if (!opts.region) throw new Error('region is required')
  if (!opts.dirname) throw new Error('dirname is required')

Client-side encryption for upload?

Hi,

Thanks for your code. Very inspiring for my project.

I would like to send quite big files to S3, so I am using multipart upload, but as you know the AWS S3 JS SDK does not support client-side encryption 😒

The use case: a browser uploads a file to my server unencrypted, and while I receive the streams of this upload, I send them to S3 with s3.upload (sample here: https://devcenter.heroku.com/articles/s3-upload-node). That way, I don't have to wait for the whole upload to finish before uploading to S3, and I can still use client-side encryption to my own server.

Do you think your approach/code could work for this use case (multipart upload)? For instance, if I override the s3.upload method to add client-side encryption params.

I am worried that S3 will not be able to recompose the file from all the different parts if they have each been encrypted on their own.

I know it is not completely clear, but if someone understands my need, your helpful advice is welcome πŸ˜‰

Content after upload

It's just one idea, but what do you think about sending the buffer across the middlewares?

Maybe something like req.file.buffer, as we have in MemoryStorage.

In my case this is useful, but I don't know if it is useful enough to keep in the public API of multer-s3.

Can't set Cache-Control header

As far as I can tell there is no way to set a Cache-Control header from the multer-s3 parameters. I really don't want to make a separate request to S3 just to update this setting πŸ˜†

Should be pretty easy to bake this into MulterS3, would you accept a PR for this?

I'm thinking an optional parameter, something like this:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    cacheControl: 'max-age=31536000',
    acl: 'public-read',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
});

Allow dynamic s3 instance/config?

I need to be able to change S3 credentials depending on the request. If you make the s3 option a function, a new instance can be created and returned when required. Would that work?

Upload progress event

Files are being uploaded successfully to S3, but I am unable to show any progress bar events in the browser.

What must I do to get progress events?

File param not recognized.

Hello,

I've implemented the setup for multer-s3 exactly as you have demonstrated, and it looks like file uploading to S3 is working! However, the file object passed to the metadata and key functions does not have a key or location, even though the upload succeeds.

Ultimately, my goal is to upload the file but also return the file path as a JSON response to my client. How would you recommend doing this? Why does the file object seem to be empty, and what are the metadata and key functions actually meant to do?

Thanks,
Adam

var upload = this.s3.upload(params) TypeError: this.s3.upload is not a function

aws.config.update({
    secretAccessKey: 'key',
    accessKeyId: 'secret',
});

var s3 = new aws.S3()

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'pdsfiles',
    metadata: function (req, file, cb) {
      cb(null, {fieldName: file.fieldname});
    },
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

This is what the error looks like:

TypeError: this.s3.upload is not a function
at S3Storage.<anonymous> (/home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:172:26)
at /home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:58:10
at S3Storage.getContentType (/home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:8:5)
at /home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:55:13
at end (/home/simran/Downloads/NODE-BACKEND/node_modules/run-parallel/index.js:16:15)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)

Call:

router.put('/updatePatternGrading/:pattern_number', upload.any(), function(req, res){
     console.log("request obj after manipulation", req.files);
     callAPI(req, res, fn.bind(apiObj, 'updatePatternGrading'));
 })

Why do I get this?

req.body is empty in key function

When sending a payload with a file and additional text fields, I would expect req.body to contain those text fields, but it just returns an empty object. Is this intentional?

My use-case is I want to construct the file path based on these additional fields.

Specify key on endpoint

Hi! Thanks for this useful middleware.
I was looking for the possibility to specify a prefix in the endpoint, like:

app.post('/', upload.array('images', 5, 'user_images_prefix'), (req, res) => {})

Target folder

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'web-test-2016',
    acl: 'public-read',
    key: function (req, file, cb) {
      cb(null, makeid() + '_' + file.originalname)
      //Date.now().toString()
    }
  })
})

In this case, how can I set the destination folder name? Thanks.
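
S3 has no real folders; a slash-separated prefix in the key is what the console displays as a folder. A sketch under that assumption (the prefix name is a placeholder, makeid is the helper from the snippet above):

key: function (req, file, cb) {
  // 'uploads/' shows up as a folder in the S3 console
  cb(null, 'uploads/' + makeid() + '_' + file.originalname)
}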

Avoid broken files uploaded

Is there a way to prevent a broken file from being uploaded?

I see in my API that some files are uploaded only halfway; the other part of the image is just a gray block. Maybe some bytes are lost, I guess.

How can I avoid this behavior? In other cases we use an MD5 checksum, but when working with streams that may be a complicated thing.

Any ideas?

Limit size

Hi,

Is it possible to set a size limit for uploads?

Thanks
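
Size limits are enforced by multer itself rather than by the storage engine; a minimal sketch using multer's documented limits option:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  }),
  limits: { fileSize: 5 * 1024 * 1024 } // reject files larger than 5 MB
})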

acl: 'public-read' gives access denied error

Without acl: 'public-read', files get uploaded, but when I click the file link it says access denied.

So I added acl: 'public-read'. Now it gives the error

AccessDenied: Access Denied

and the files don't get uploaded at all.

`dirname` is not documented

If I npm install multer-s3, it downloads v1.4.1. However, the documentation is for versions of multer-s3 ahead of v1.4.1, where dirname is no longer a required option.

I suggest that while multer-s3@1.4.1 is the default on npm, we bring back all the documentation needed for using that version of the module.

It took me a while to figure out what was going on.

Get response from S3

I used it this way.

var multer = require('multer');
var s3 = require('multer-s3');

var upload = multer({
  storage: s3({
    dirname: 'uploads/photos',
    bucket: 'testing-bucket-yousa',
    secretAccessKey: 'abcd',
    accessKeyId: 'zz',
    region: 'us-east-1'
  })
})

app.post('/admin/uploadImagetoS3', upload.array('file'), function (req, res, next) {
  console.log(res)
})

What does upload.array('file') do?

Please clarify why dirname is required; I don't want to store the data on my local machine. Also, how do I capture the response from S3 to know whether the upload was successful or not?

Where do I put my S3 access key and secret?

The README shows:

var s3 = new aws.S3({ /* ... */ })

Where do I put

"accessKeyId": "xxxxxxxxxxxxxxxx",
"secretAccessKey": "xxxxxxxxxxxxxx"

I get this error:
AWS Missing credentials when I try to send something to my S3 bucket.

npm is not installing the correct version.

Duncan,
FYI - I noticed that npm is not installing the latest version of your package. The version numbers match, but the index.js from npm does not match the index.js from this git repo. Using the version from this repo works great.

Steve

Default content type wrong

All files show up as application/octet-stream in S3.

This should be set to the actual content type of the file.

rendering via an S3 URL

I'm trying to render an image via an S3 URL of the following form:

'https://s3.amazonaws.com/bucket-name/' + req.file.key

where req.file.key is my storage's dirname attribute; however, I keep getting the following error when I navigate to that URL: "This XML file does not appear to have any style information associated with it. The document tree is shown below." I'm not sure if this is an issue with multer itself or with this module for storing to my S3 bucket. I do indeed have my storage acl set to 'public-read'.

If this helps, logging req.file to the console returns an object with all the proper metadata, except that the size is "undefined." If you could let me know what you think, Duncan, that'd be great!

Generating Hash before Upload to S3

Just being curious: is it at all possible to hook into multer-s3 before the upload and access the locally cached file?

What I'd like to do:

  1. Hook into before upload
  2. Calculate hash
  3. Calculate phash
  4. Check with db if this file was already uploaded
  5. If yes, stop uploading to s3 and return file reference instead

I really appreciate your help to get a quick dive into multer and multer-s3!

S3 Object Upload Cancellation

Is there any way to cancel the file upload? Say if the request is unauthorized and we don't want the file from the request to be uploaded to S3.
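
One option that stops the file before it reaches S3 is multer's own fileFilter, which runs ahead of the storage engine; a sketch assuming a hypothetical isAuthorized(req) check:

var upload = multer({
  storage: multerS3({ /* ... storage options as above ... */ }),
  fileFilter: function (req, file, cb) {
    // isAuthorized is a hypothetical helper for this example.
    if (!isAuthorized(req)) return cb(null, false) // skip this file
    cb(null, true) // accept the file
  }
})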

Handling Large Files

I am not able to upload large files using multer-s3. It does not give me any error either; it just doesn't upload the file, never enters the callback, and the request times out. Is there any way to handle uploading large files to an S3 bucket?

I am using it like this:

var uploadSingle = upload.single('uploadFile');

router.post('/uploadVideo',function(req,res,next){	
	uploadSingle(req,res,function(err){
                // doesn't come here if the file is large
	        if(err){
	            //Error Response , Error while uploading Module PDF;
	        }
	        else{
	            //handling file upload
	           // success response
	        }
	});
}

It doesn't enter the callback of uploadSingle.

File Name

I have the following code to save files in S3:

var express = require('express'),
    aws = require('aws-sdk'),
    bodyParser = require('body-parser'),
    multer = require('multer'),
    multerS3 = require('multer-s3');

aws.config.update({
    secretAccessKey: 'XXXXXXXXXX',
    accessKeyId: 'XXXXXXXXXX'
});

var app = express(),
    s3 = new aws.S3();

app.use(bodyParser.json());

var upload = multer({
    storage: multerS3({
        s3: s3,
        bucket: 'XXXXXXX',
        key: function (req, file, cb) {
            console.log(file);
            cb(null, Date.now()+file.originalname); 
        }
    })
});
app.post('/upload', upload.any(), function (req, res, next) {
    // Here I want to get the File Name
     res.send("Uploaded!");
 });

I want to get the file name/key at the handler level once the upload is successful. I will be saving the files with unique IDs; once saved, I will store the key in a relational DB to keep track of the files. How can I get the file name [Date.now()+file.originalname]?
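
For reference, the key lands on each uploaded file object (see the file information table above), so a sketch of the handler could read it there:

app.post('/upload', upload.any(), function (req, res, next) {
  // With upload.any(), req.files is an array; each entry carries its S3 key.
  var keys = req.files.map(function (f) { return f.key })
  // Persist the keys to the relational DB here, then respond.
  res.send('Uploaded: ' + keys.join(', '))
})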

Drop the dependency on s3fs

I think it would be a good improvement to drop the dependency on s3fs and instead use aws-sdk directly. This would save the extra dependencies and give us access to upstream updates quicker.

I would be happy to provide the code if no one beats me to it.

On a related note, it seems like uploading to s3 from multer is very popular. How would you feel about getting some help maintaining this module? I would personally be happy to hop on as a maintainer. Since it's such a small module, I think most of the work would be cutting new releases when dependencies (probably only aws-sdk) update, and answering questions from users.

I'm linusu on npm πŸ˜„

How to get req.body parameters in multer s3

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'bucket',
    metadata: function (req, file, cb) {
      cb(null, {
        fieldName: file.fieldname
      });
    },
    key: function (req, file, cb) {
      console.log('req.body', req.params.id); // not getting it
      console.log('req.body', req.body);
      // Not getting the param here that was passed to the API.
      // I need to save the file on S3 at a specific location, i.e. /foldername/filename,
      // but the folder name is not coming through from the API.
      cb(null, file.originalname)
    }
  })
}).array('userFile', 1);


Above is the multer-s3 code.


app.post('/saveData', function (req, res, next) {
  upload(req, res, function (err) {
    console.log('err' + err);
    var status = '';
    var result = '';
    var link = '';
    if (err) {
      status = false;
    } else {
      status = true;
    }
    result = {
      "status": status,
      "link": link
    };
  });
  res.send(result);
});


Above is the code where the multer upload function is called. I am sending the data to the API (from Angular 2, with Content-Type set to multipart/form-data) as FormData:

let formData: FormData = new FormData();

formData.append('userFile', file);

formData.append('fileName', fileName);

I need the req.body data from the API, such as the folder name, so I can put the file in a specific place on S3. I need the req.body data inside the key: function(req, file, cb) of multerS3.
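
One detail that may explain this: multer populates req.body in the order the multipart parts arrive, so text fields appended to the FormData before the file are usually visible inside the key callback, while fields appended after the file are not. A sketch under that assumption (folderName is a hypothetical field appended before userFile):

key: function (req, file, cb) {
  // Only set if the client appended 'folderName' before the file part.
  var folder = req.body.folderName || 'default'
  cb(null, folder + '/' + file.originalname)
}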

Mark file public

I don't see an option to pass an ACL param when storing a file. Is this missing, or can it be done somehow?

Link after upload

Is there a way to reference the link on the req object after the upload to S3 completes?

Update file-type version

Hi there,

The dependency file-type is set to ^3.3.0, but they are currently at 7.2.0.

I tested in my fork and it all works as expected.

Thanks

create new directory

Hi, is it possible to create a new directory in the bucket before uploading, via dirname?

Thanks.

I get this error

Missing credentials in config

Error: connect ENETUNREACH 169.254.169.254:80
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1061:14)

Is it possible to run some kind of function for each uploaded file when posting multiple files at once?

I have following for upload files into S3

const upload = extension => multer({
  storage: multerS3({
    s3,
    bucket,
    acl,
    metadata(req, file, cb) {
      cb(null, { fieldName: file.fieldname });
    },
    ...
  }),
});

router.post(
  '/upload',
  upload('jpg').array('files', 5),
  (req, res) => {
    console.log(req.files);
    ...

Is it possible to run some kind of function for each single uploaded file (to write some information about the file into the DB before multer begins uploading the next file from array('files', 5))? For example, if writing the data into the DB fails, multer needs to abort uploading the rest of the files. Or do I need to upload each file separately?

File Extension

Hi, I am using your library with some success, but I was curious: what is your recommended way to add a file extension? I tried the key attribute that is passed into the options, but it seems the docs are outdated. Let me know how you normally handle this.

Thanks,

Jordy
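
A common approach, sketched here with Node's built-in path module, is to carry the extension over from file.originalname:

var path = require('path')

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    key: function (req, file, cb) {
      // Keep the original extension, e.g. '.jpg'
      cb(null, Date.now().toString() + path.extname(file.originalname))
    }
  })
})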
