Netlify plugin TTL cache

A Netlify plugin for persisting immutable build assets across releases.

How it works

By default, Netlify replaces all existing static assets when publishing new releases.

For sites where assets are unique to each deployment and loaded dynamically (e.g. via React.lazy), this can lead to runtime errors (e.g. chunk-load errors) in sessions that are still running the previous release and request chunks that no longer exist.

This plugin prevents the problem by carrying assets from previous releases over into each new deploy, pruning them once they exceed a configured TTL.
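
In outline: after each build, restore the previously published assets from Netlify's build cache into the publish directory, prune anything older than the TTL, and save the result back for the next deploy. The sketch below illustrates that flow using Netlify's documented utils.cache helpers; it is a simplification rather than the plugin's actual source (directory recursion, the exclude input, and overwrite ordering are glossed over).

const { stat, readdir, unlink } = require("fs/promises");
const { join } = require("path");

module.exports = {
  onPostBuild: async ({ inputs, utils }) => {
    const dir = inputs.path; // e.g. "build"

    // Restore assets cached by previous deploys into the publish directory.
    await utils.cache.restore(dir);

    // Prune files older than the TTL (top level only, for brevity).
    const maxAgeMs = inputs.ttl * 24 * 60 * 60 * 1000;
    for (const name of await readdir(dir)) {
      const file = join(dir, name);
      const stats = await stat(file);
      if (stats.isFile() && Date.now() - stats.mtime.getTime() > maxAgeMs) {
        await unlink(file);
      }
    }

    // Save the merged output so the next deploy can restore it.
    await utils.cache.save(dir);
  },
};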

Usage

Install the plugin

npm i -D netlify-plugin-ttl-cache

Add the plugin to your netlify.toml

[[plugins]]
package = "netlify-plugin-ttl-cache"
  [plugins.inputs]
  path = "build"
  ttl = 90

Inputs

path

Build output directory.

type: string

default: "build"

ttl

Maximum age (days) of files in cache.

type: number

default: 90

exclude

Regular expression (as a string) matching files to exclude from the cache.

type: string

default: n/a
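
For example, to stop source maps from being carried across releases, the configuration might look like this (the pattern is illustrative):

[[plugins]]
package = "netlify-plugin-ttl-cache"
  [plugins.inputs]
  path = "build"
  ttl = 90
  exclude = "\\.map$"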


Issues

Avoid binaries `cp` and `rsync`

Using binaries like cp or rsync does not work for builds triggered with the Netlify CLI. Those builds run on the user's machine, which might not have cp and/or rsync installed, especially on Windows.

Would using a core Node.js fs.* method or a library like cp-file or cpy be an option instead?
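
For reference, a portable copy step is available in Node.js itself. A minimal sketch, assuming hypothetical cacheDir and publishDir arguments:

const { cp } = require("fs/promises");

// fs.cp (Node.js >= 16.7) copies recursively without shelling out to
// cp/rsync, so it behaves the same on Windows and in Netlify CLI builds.
// force: false keeps freshly built files over cached ones.
async function restoreFromCache(cacheDir, publishDir) {
  await cp(cacheDir, publishDir, { recursive: true, force: false });
}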

File crawling performance

Crawling the publish directory might be slow for some big sites. There might be a few opportunities to optimize it:

  • Each readdir already performs a stat syscall, so doing it again in `const { mtime } = await stat(file);` might be redundant
  • If no exclude input is specified, there is no need to run test() on each filename. Even though the default regular expression a^ should be fast and never match, the cost might add up when it is evaluated thousands of times (see the sketch after this list).
  • Directories matched by exclude might not need to be crawled at all
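
A minimal sketch of the second point, building the predicate once from a hypothetical inputs object and short-circuiting when no exclude was given:

const pattern = inputs.exclude ? new RegExp(inputs.exclude) : null;
// Without an exclude input, skip the regex entirely instead of testing
// the never-matching default a^ against every filename.
const shouldExclude = (name) => pattern !== null && pattern.test(name);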

There might also be some potential bugs in the directory crawling. For example, if a file were a symlink to one of its parent directories, would the crawl keep running until memory is exhausted?

I am wondering whether using a tried-and-tested library like readdirp might fix all of this and also simplify the code. What are your thoughts?
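
For illustration, a readdirp-based crawl might look like the sketch below (assuming readdirp v3's documented options; the function name and maxAgeMs parameter are made up for this example). A mature walker is also more likely to handle edge cases like symlink cycles.

const readdirp = require("readdirp");

// Walk dir and collect files older than maxAgeMs, reusing the stats
// gathered during the walk (alwaysStat) and applying the exclude
// pattern while crawling rather than afterwards.
async function findStaleFiles(dir, maxAgeMs, exclude) {
  const stale = [];
  const stream = readdirp(dir, {
    type: "files",
    alwaysStat: true,
    fileFilter: (entry) => !(exclude && exclude.test(entry.basename)),
  });
  for await (const entry of stream) {
    if (Date.now() - entry.stats.mtime.getTime() > maxAgeMs) {
      stale.push(entry.fullPath);
    }
  }
  return stale;
}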
